
If I don't want to have to download the files found in a specific url path manually, what options do I have? Using wildcards fails:

$ wget 'http://www.shinken-monitoring.org/pub/debian/*deb'
Warning: wildcards not supported in HTTP.
....

This of course assumes that I don't know the filenames in advance.


Try this:

wget -r -l1 --no-parent -A ".deb" http://www.shinken-monitoring.org/pub/debian/

-r recursively
-l1 to a maximum depth of 1
--no-parent ignore links to a higher directory
-A ".deb" accept only files matching this suffix (your pattern)

  • Although it's true in this case - this does assume that the web server returns a page at the URL that lists all the files. If it returns an index page without links to any of the mentioned files, wget can't magically get them. – EightBitTony Aug 17 '11 at 18:57
  • I think that the -nd option will also be useful here. It allows downloading matched files to the current directory without creating a hierarchy of directories. – annndrey Jul 10 '14 at 13:54
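As the first comment notes, this only works because the server returns an HTML index page linking to each file. If wget is unavailable, the same idea can be scripted by hand: fetch the index, extract the links, and keep only those ending in .deb. Below is a minimal Python sketch of that link-filtering step; the sample listing HTML and the filename in it are made up for illustration, and a real script would fetch the page with urllib and then download each matched URL.

```python
# Sketch of what `wget -r -l1 -A ".deb"` does under the hood:
# parse an index page, collect <a href> links, keep the .deb ones.
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkCollector(HTMLParser):
    """Collect href attributes from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def deb_urls(base_url, index_html):
    """Return absolute URLs of all .deb files linked from an index page."""
    parser = LinkCollector()
    parser.feed(index_html)
    return [urljoin(base_url, href) for href in parser.links
            if href.endswith(".deb")]

# Hypothetical directory listing, similar to what an Apache index emits:
sample = '''
<a href="../">Parent Directory</a>
<a href="shinken_1.0-1_all.deb">shinken_1.0-1_all.deb</a>
<a href="README.txt">README.txt</a>
'''
print(deb_urls("http://www.shinken-monitoring.org/pub/debian/", sample))
# -> ['http://www.shinken-monitoring.org/pub/debian/shinken_1.0-1_all.deb']
```

Note that, like wget's -A filter, this matches on the link suffix only; it recovers nothing the index page does not link to.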
