How can I handle saved complete web pages and their directories (e.g. n.html and n_files) in Nautilus?
Using a typical web browser, e.g. Firefox, you can save a web page ("complete web page"), which stores the HTML file, say n.html, and the page's resources in a corresponding directory, n_files.
In Windows 7, if you copy, move, or rename either the folder or the HTML file, they are treated as a single unit. However, Nautilus (the default GNOME file manager) does not do this.
Is there a Nautilus script available that enables this capability? Is there an alternative way to achieve the same thing?
I suppose the renaming capability in Explorer is based on special attributes in the filesystem that Explorer recognizes (that's how much of this kind of functionality in Explorer works). It would be possible to implement something similar in GNOME/Nautilus (provided you're using a filesystem that supports extended attributes), but AFAIK it doesn't currently exist.
Another possibility would be to write a Nautilus plugin that uses some heuristics to recognize such HTML file + matching directory pairs and does what you want, but again I don't know of an existing solution (it's also not trivial to implement correctly).
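The rename case at least is simple enough to script by hand. A minimal sketch (the function name is my own; this only handles the n.html / n_files naming convention, not copy or move):

```shell
#!/bin/sh
# Hypothetical helper: rename a saved page (n.html) together with its
# companion directory (n_files), the way Explorer does on Windows.
mv_saved_page() {
    old=$1
    new=$2
    # Derive the companion directory names from the .html names.
    old_dir="${old%.html}_files"
    new_dir="${new%.html}_files"
    mv -- "$old" "$new"
    # Move the companion directory too, if the browser created one.
    if [ -d "$old_dir" ]; then
        mv -- "$old_dir" "$new_dir"
    fi
}
```

Wrapped in a small script that prompts for the new name (e.g. via zenity) and dropped into your Nautilus scripts directory, this should show up in the right-click Scripts menu; I haven't wired that part up, so treat it as a starting point.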
I suggest using the UnMHT add-on for Firefox to save the page as a single file (perhaps there is something similar for other browsers as well).
Unlike the Mozilla Archive Format (aka MAF), MHT (aka MHTML) is standardized in an official specification (RFC 2557), and it is also supported by IE and other applications, which makes it more future-proof. There are also MHT-viewing plugins for Opera and Safari.
http://www.unmht.org/en_index.html (Firefox extension + viewers for Opera, Safari & QuickLook)
The Firefox add-on is also available on Mozilla's add-on site.
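For the curious: an MHT file is just a MIME multipart/related message as described in RFC 2557, with each resource carried as one part. A hand-written, abbreviated skeleton (not byte-exact UnMHT output) looks roughly like:

```
From: <Saved by UnMHT>
Subject: Example page
MIME-Version: 1.0
Content-Type: multipart/related; boundary="----=_boundary"; type="text/html"

------=_boundary
Content-Type: text/html; charset=utf-8
Content-Location: http://example.com/index.html

<html><body><img src="logo.png"></body></html>

------=_boundary
Content-Type: image/png
Content-Transfer-Encoding: base64
Content-Location: http://example.com/logo.png

(base64 image data)
------=_boundary--
```

Since everything the page needs travels in this one file, renaming or moving it in any file manager just works.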
You can download the entire thing using wget.
wget -r --level=0 --convert-links --page-requisites --no-parent http://url.com
-r means the download is recursive
--level=0 means it descends an unlimited number of levels (so http://url.com/pictures/babes/pics.html will be saved, not just the top-level page)
--convert-links means it rewrites links in the downloaded pages to point at the local copies, e.g. from
<a href="http://url.com/page.html">link</a> to a relative link such as <a href="page.html">link</a>
--page-requisites means it also downloads everything a page needs to display properly (images, stylesheets, and so on)
--no-parent means it does not download pages that are "higher up" in the hierarchy. So if you want http://url.com/graphics/index.html and everything "below" it, http://url.com/index.html won't be downloaded.
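If you do this often, the command is easy to wrap in a small shell function (save_site and the DRY_RUN switch are my own naming, not part of wget):

```shell
#!/bin/sh
# Hypothetical wrapper around the wget invocation above.
# With DRY_RUN=1 it only prints the command it would run,
# otherwise it executes it.
save_site() {
    set -- wget -r --level=0 --convert-links --page-requisites --no-parent "$1"
    if [ "${DRY_RUN:-0}" = 1 ]; then
        echo "$*"
    else
        "$@"
    fi
}
```

For example, `save_site http://url.com/graphics/index.html` mirrors that page and everything below it into a directory named after the host.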