Commit graph

43 commits

Author SHA1 Message Date
Eelco Dolstra 462421d345 Backport libfetchers from the flakes branch
This provides a pluggable mechanism for defining new fetchers. It adds
a builtin function 'fetchTree' that generalizes existing fetchers like
'fetchGit', 'fetchMercurial' and 'fetchTarball'. 'fetchTree' takes a
set of attributes, e.g.

  fetchTree {
    type = "git";
    url = "https://example.org/repo.git";
    ref = "some-branch";
    rev = "abcdef...";
  }

The existing fetchers are just wrappers around this. Note that the
input attributes to fetchTree are the same as flake input
specifications and flake lock file entries.

All fetchers share a common cache stored in
~/.cache/nix/fetcher-cache-v1.sqlite. This replaces the ad hoc caching
mechanisms in fetchGit and download.cc (e.g. ~/.cache/nix/{tarballs,git-revs*}).

This also adds support for Git worktrees (c169ea5904).
2020-04-07 09:03:14 +02:00
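To illustrate the "wrappers" remark in the commit above: a fetchGit call such as the following corresponds to the fetchTree example with type = "git" (a minimal sketch; the URL, branch and revision are the same placeholders as above):

  builtins.fetchGit {
    url = "https://example.org/repo.git";  # same attributes as the fetchTree call above
    ref = "some-branch";
    rev = "abcdef...";
  }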
Eelco Dolstra 04bf9acd22
Remove #include 2019-11-07 10:12:35 +01:00
Eelco Dolstra 3a022d4599 Shut up some warnings
(cherry picked from commit 99e8e58f2d)
2019-09-22 21:57:05 +02:00
Graham Christensen a02457db71
conf: stalled-download-timeout: make tunable
Make curl's low speed limit configurable via stalled-download-timeout.
Before, this limit was five minutes without receiving a single byte.
This is much too long, given that the remote end may not even have
acknowledged the HTTP request.
2019-08-08 10:22:13 -04:00
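The setting from the commit above goes in nix.conf; a minimal sketch with an illustrative value (the timeout is in seconds):

  # Abort a download if no data has been received for 60 seconds
  stalled-download-timeout = 60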
Eelco Dolstra 03f09e1d18
Revert "Fix 'error 9 while decompressing xz file'"
This reverts commit 78fa47a7f0.
2019-07-10 19:46:15 +02:00
Eelco Dolstra f8b30338ac
Refactor downloadCached() interface
(cherry picked from commit df3f5a78d5)
2019-06-24 22:12:26 +02:00
Eelco Dolstra 7b9c68766d
Add '--no-net' convenience flag
This flag

* Disables substituters.

* Sets the tarball-ttl to infinity (ensuring e.g. that the flake
  registry and any downloaded flakes are considered current).

* Disables retrying downloads and sets the connection timeout to the
  minimum. (So it doesn't completely disable downloads at the moment.)

(cherry picked from commit 8ea842260b)
2019-06-24 22:07:29 +02:00
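A usage sketch of the flag described above with the new-style CLI (the package is just an example):

  # Build without consulting substituters and without refreshing cached tarballs
  nix build --no-net -f '<nixpkgs>' hello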
Eelco Dolstra 78fa47a7f0
Fix 'error 9 while decompressing xz file'
Once we've started writing data to a Sink, we can't restart a download
request, because then we end up writing duplicate data to the
Sink. Therefore we shouldn't handle retries in Downloader but at a
higher level (in particular, in copyStorePath()).

Fixes #2952.

(cherry picked from commit a67cf5a358)
2019-06-24 21:59:51 +02:00
Eelco Dolstra b43e1e186e
CachedDownloadResult: Include store path
Also, make fetchGit and fetchMercurial update allowedPaths properly.

(Maybe the evaluator, rather than the caller of the evaluator, should
apply toRealPath(), but that's a bigger change.)

(cherry picked from commit 5c34d66538)
2019-06-24 21:59:27 +02:00
Eelco Dolstra dc29e9fb47
downloadCached: Return ETag
(cherry picked from commit 529add316c)
2019-06-24 21:58:33 +02:00
Eelco Dolstra d3761f5f8b
Fix Brotli decompression in 'nix log'
This didn't work anymore since decompression was only done in the
non-coroutine case.

Decompressors are now sinks, just like compressors.

Also fixed a bug in bzip2 API handling (we have to handle BZ_RUN_OK
rather than BZ_OK), which we didn't notice because there was a missing
'throw':

  if (ret != BZ_OK)
      CompressionError("error while compressing bzip2 file");
2018-08-06 15:40:29 +02:00
Eelco Dolstra 4361a4331f
Fix reporting of HTTP body size when a result callback is used 2018-08-06 11:31:14 +02:00
Eelco Dolstra a2ec7a3bfd
Further improve upload messages 2018-06-05 14:37:26 +02:00
Eelco Dolstra e87e4a60d6
Make HttpBinaryCacheStore::narFromPath() run in constant memory
This reduces memory consumption of

  nix copy --from https://cache.nixos.org --to ~/my-nix /nix/store/95cwv4q54dc6giaqv6q6p4r02ia2km35-blender-2.79

from 176 MiB to 82 MiB. (The remaining memory is probably due to xz
decompression overhead.)

Issue https://github.com/NixOS/nix/issues/1681.
Issue https://github.com/NixOS/nix/issues/1969.
2018-05-30 13:42:29 +02:00
Eelco Dolstra 81ea8bd5ce
Simplify the callback mechanism 2018-05-30 13:34:37 +02:00
Yorick b9289e4855
make sure not to use cached channels for nix-channel --update
fixes #1964
2018-05-09 16:18:20 +02:00
Asad Saeeduddin be54f4a0b6 Wrap thread local in function for Cygwin
Fixes #1826. See #1352 for a previous instance of a similar change.
2018-03-12 00:56:41 -04:00
Eelco Dolstra 30370f168f
Cleanup 2018-01-31 15:14:03 +01:00
Shea Levy 1d5d277ac7
HttpBinaryCacheStore: Support upsertFile with PUT.
Some servers, such as Artifactory, allow uploading with PUT and BASIC
auth. This allows 'nix copy' to upload binaries to those servers.

Worked on together with @adelbertc
2018-01-26 11:12:30 -08:00
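A sketch of what the commit above enables (the server URL and store path are placeholders; how the BASIC credentials are supplied is not shown):

  # Upload a store path to an HTTP binary cache that accepts PUT
  nix copy --to https://artifactory.example.org/nix-cache /nix/store/...-hello-2.10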
Eelco Dolstra aca4f7dff0
Don't remove Content-Encoding in fetchurl / nix-prefetch-url
Fixes #1568.
2017-09-18 11:07:28 +02:00
Eelco Dolstra c137c0a5eb
Allow activities to be nested
In particular, this allows more relevant activities ("substituting X")
to supersede inferior ones ("downloading X").
2017-08-25 17:49:40 +02:00
Eelco Dolstra b01d62285c
Improve progress indicator 2017-05-16 16:09:57 +02:00
Dan Peebles e43e8be8e7 Default to 5 download retries
This should help callers, like nix-channel, that don't request a
specific number of retries.
2017-04-10 09:22:24 -04:00
Eelco Dolstra 8b1d65bebe
S3BinaryCacheStore: Support compression of narinfo and log files
You can now set the store parameter "text-compression=br" to compress
textual files in the binary cache (i.e. narinfo and logs) using
Brotli. This sets the Content-Encoding header; the extension of
compressed files is unchanged.

You can separately specify the compression of log files using
"log-compression=br". This is useful when you don't want to compress
narinfo files for backward compatibility.
2017-03-15 16:49:28 +01:00
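The parameter from the commit above is passed in the store URI; a sketch (the bucket name and store path are placeholders):

  # Compress narinfo and log files with Brotli when copying to the cache
  nix copy --to 's3://example-nix-cache?text-compression=br' /nix/store/...-hello-2.10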
Eelco Dolstra 9ff9c3f2f8
Add support for s3:// URIs
This adds support for s3:// URIs in all places where Nix allows URIs,
e.g. in builtins.fetchurl, builtins.fetchTarball, <nix/fetchurl.nix>
and NIX_PATH. It allows fetching resources from private S3 buckets,
using credentials obtained from the standard places (i.e. AWS_*
environment variables, ~/.aws/credentials and the EC2 metadata
server). This may not be super-useful in general, but since we already
depend on aws-sdk-cpp, it's a cheap feature to add.
2017-02-14 14:20:00 +01:00
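A sketch of the s3:// support above in a Nix expression (the bucket and key are placeholders; credentials are taken from the sources listed in the commit message):

  # Fetch a file from a private S3 bucket
  builtins.fetchurl "s3://example-private-bucket/some-source.tar.xz"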
Eelco Dolstra 215b70f51e
Revert "Get rid of unicode quotes (#1140)"
This reverts commit f78126bfd6. There
really is no need for such a massive change...
2016-11-26 00:38:01 +01:00
Guillaume Maudoux f78126bfd6 Get rid of unicode quotes (#1140) 2016-11-25 15:48:27 +01:00
Eelco Dolstra 75989bdca7 Make computeFSClosure() single-threaded again
The fact that queryPathInfo() is synchronous meant that we needed a
thread for every concurrent binary cache lookup, even though they end
up being handled by the same download thread. Requiring hundreds of
threads is not a good idea. So now there is an asynchronous version of
queryPathInfo() that takes a callback function to process the
result. Similarly, enqueueDownload() now takes a callback rather than
returning a future.

Thus, a command like

  nix path-info --store https://cache.nixos.org/ -r /nix/store/slljrzwmpygy1daay14kjszsr9xix063-nixos-16.09beta231.dccf8c5

that returns 4941 paths now takes 1.87s using only 2 threads (the main
thread and the downloader thread). (This is with a prewarmed
CloudFront.)
2016-09-16 18:54:14 +02:00
Eelco Dolstra 90ad02bf62 Enable HTTP/2 support
The binary cache store can now use HTTP/2 to do lookups. This is much
more efficient than HTTP/1.1 due to multiplexing: we can issue many
requests in parallel over a single TCP connection. Thus it's no longer
necessary to use a bunch of concurrent TCP connections (25 by
default).

For example, downloading 802 .narinfo files from
https://cache.nixos.org/, using a single TCP connection, takes 11.8s
with HTTP/1.1, but only 0.61s with HTTP/2.

This did require a fairly substantial rewrite of the Downloader class
to use the curl multi interface, because otherwise curl wouldn't be
able to do multiplexing for us. As a bonus, we get connection reuse
even with HTTP/1.1. All downloads are now handled by a single worker
thread. Clients call Downloader::enqueueDownload() to tell the worker
thread to start the download, getting a std::future to the result.
2016-09-14 16:36:02 +02:00
Shea Levy dfe0938614 download.hh: Fix conflicts from nix-channel-c++ merge 2016-08-31 09:57:56 -04:00
Shea Levy 572aba284a Merge branch 'nix-channel-c++' 2016-08-31 09:49:24 -04:00
Eelco Dolstra 6631a6e1a1 Increase the sleep time between download retries 2016-08-30 15:48:24 +02:00
Shea Levy d52d391164 builtins.fetch{url,tarball}: Allow name attribute 2016-08-15 07:37:11 -04:00
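A sketch of the name attribute added above (URL and name are placeholders); it overrides the store-path name that would otherwise be taken from the last component of the URL:

  builtins.fetchurl {
    url = "https://example.org/download?rev=abcdef";  # last URL component would give an awkward store-path name
    name = "source.tar.gz";                           # name to use for the resulting store path instead
  }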
Shea Levy 59124228b3 nix-channel: implement in c++ 2016-08-11 11:34:43 -04:00
Eelco Dolstra 66adbdfd97 HttpBinaryCacheStore: Retry on transient HTTP errors
This makes us more robust against 500 errors from CloudFront or S3
(assuming the 500 error isn't cached by CloudFront...).
2016-08-10 18:08:23 +02:00
Eelco Dolstra 06bbfb6004 builtins.{fetchurl,fetchTarball}: Support a sha256 attribute
Also, allow builtins.{fetchurl,fetchTarball} in restricted mode if a
hash is specified.
2016-07-26 21:16:52 +02:00
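A sketch of the sha256 attribute from the commit above (URL and hash are placeholders); with the hash given, the fetch is also allowed in restricted mode:

  builtins.fetchTarball {
    url = "https://example.org/source.tar.gz";
    # placeholder; the real base-32 hash of the unpacked tarball is required
    sha256 = "0000000000000000000000000000000000000000000000000000";
  }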
Eelco Dolstra d1b0909894 BinaryCacheStore::readFile(): Return a shared_ptr to a string
This allows readFile() to indicate that a file doesn't exist, and
might eliminate some large string copying.
2016-04-15 15:39:48 +02:00
Eelco Dolstra e9c50064b5 Add an HTTP binary cache store
Allowing stuff like

  NIX_REMOTE=https://cache.nixos.org nix-store -qR /nix/store/x1p1gl3a4kkz5ci0nfbayjqlqmczp1kq-geeqie-1.1

or

  NIX_REMOTE=https://cache.nixos.org nix-store --export /nix/store/x1p1gl3a4kkz5ci0nfbayjqlqmczp1kq-geeqie-1.1 | nix-store --import
2016-02-29 18:15:20 +01:00
Eelco Dolstra fa7cd5369b StoreAPI -> Store
Calling a class an API is a bit redundant...
2016-02-04 14:48:42 +01:00
Eelco Dolstra c10c61449f Eliminate the "store" global variable
Also, move a few free-standing functions into StoreAPI and Derivation.

Also, introduce a non-nullable smart pointer, ref<T>, which is just a
wrapper around std::shared_ptr ensuring that the pointer is never
null. (For reference-counted values, this is better than passing a
"T&", because the latter doesn't maintain the refcount. Usually, the
caller will have a shared_ptr keeping the value alive, but that's not
always the case, e.g., when passing a reference to a std::thread via
std::bind.)
2016-02-04 14:28:26 +01:00
Eelco Dolstra 01615b5f63 Show progress indicator for builtin fetchurl 2015-10-21 15:14:42 +02:00
Eelco Dolstra 5db358d4d7 Disable TLS verification for builtin fetchurl
This makes it consistent with the Nixpkgs fetchurl and makes it work
in chroots. We don't need verification because the hash of the result
is checked anyway.
2015-10-21 15:14:42 +02:00
Eelco Dolstra 0a2bee307b Make <nix/fetchurl.nix> a builtin builder
This ensures that 1) the derivation doesn't change when Nix changes;
2) the derivation closure doesn't contain Nix and its dependencies; 3)
we don't have to rely on ugly chroot hacks.
2015-07-20 04:38:46 +02:00
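A sketch of calling the builtin builder above from a Nix expression (URL and hash are placeholders):

  import <nix/fetchurl.nix> {
    url = "https://example.org/source.tar.gz";
    sha256 = "0000000000000000000000000000000000000000000000000000";  # placeholder hash
  }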
Renamed from src/libexpr/download.hh