Make sure that people who run Nix in non-interactive mode (and so don't have the option to interactively accept the individual flake configuration settings) are aware of this flag.
Fix #7086
These settings seem harmless: they control the same polling
functionality that `timeout` does, but with different behavior. They
should be safe for untrusted users to pass in.
I just had a colleague get confused by the previous phrase for good
reason. "valid" sounds like an *objective* criterion, e.g. and *invalid
signature* would be one that would be trusted by no one, e.g. because it
misformatted or something.
What is actually going is that there might be a signature which is
perfectly valid to *someone else*, but not to the user, because they
don't trust the corresponding public key. This is a *subjective*
criterion, because it depends on the arbitrary and personal choice of
which public keys to trust.
I therefore think "trustworthy" is a better adjective to use. Whether
something is worthy of trust is clearly subjective, and then "trust"
within that word nicely evokes `trusted-public-keys` and friends.
The history is not critical to the functionality of nix repl, so it's
enough to warn here, rather than refuse to start if the directory Nix
thinks the history should live in can't be created.
- call close explicitly in writeFile to prevent the close exception
from being ignored
- fsync after writing schema file to flush data to disk
- fsync schema file parent to flush metadata to disk
https://github.com/NixOS/nix/issues/7064
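As an illustration of that pattern with plain POSIX calls (a standalone sketch, not Nix's actual `writeFile`; the function name is made up):

```cpp
#include <fcntl.h>
#include <unistd.h>
#include <cerrno>
#include <cstring>
#include <stdexcept>
#include <string>

// Write 'contents' to 'path' and make it durable: fsync the file data,
// close the fd explicitly so close errors are reported instead of ignored,
// and fsync the parent directory so the new directory entry is on disk too.
void writeFileDurably(const std::string & path, const std::string & contents)
{
    int fd = open(path.c_str(), O_WRONLY | O_CREAT | O_TRUNC, 0666);
    if (fd == -1)
        throw std::runtime_error("opening '" + path + "': " + std::strerror(errno));

    if (write(fd, contents.data(), contents.size()) != (ssize_t) contents.size()) {
        close(fd);
        throw std::runtime_error("writing '" + path + "'");
    }

    if (fsync(fd) == -1) {          // flush file data to disk
        close(fd);
        throw std::runtime_error("fsync '" + path + "'");
    }

    if (close(fd) == -1)            // surface close errors rather than ignoring them
        throw std::runtime_error("closing '" + path + "'");

    // Flush the parent directory's metadata so the entry itself is durable.
    auto slash = path.rfind('/');
    std::string dir = slash == std::string::npos ? "." : path.substr(0, slash);
    int dirFd = open(dir.c_str(), O_RDONLY);
    if (dirFd != -1) {
        fsync(dirFd);
        close(dirFd);
    }
}
```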
Remove the `verify TLS: Nix CA file = 'blah'` message that Nix used to print when fetching anything, as it's both useless (`libcurl` prints the same info in its logs) and misleading (it gives the impression that a new TLS connection is being established, which might not be the case because of multiplexing; see #7011).
This commit adds an optional `__impure` parameter to fetchurl.nix, which allows
the caller to use `libfetcher`'s fetcher in an impure derivation. This allows
nixpkgs' patch-normalizing fetcher (fetchpatch) to be rewritten to use nix's
internal fetchurl, thereby eliminating the awkward "you can't use fetchpatch
here" banners scattered all over the place.
See also: https://github.com/NixOS/nixpkgs/pull/188587
Implements the approach suggested by feedback on PR #6994, where
tempdir paths are created in the store (now with an exclusive lock).
As part of this work, the currently-broken and unused
`createTempDirInStore` function is updated to create an exclusive lock
on the temp directory in the store.
The GC now makes a non-blocking attempt to lock any store directories
that "look like" the temp directories created by this function, and if
it can't acquire one, ignores the directory.
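For illustration, a non-blocking exclusive lock attempt with `flock(2)` might look like this (a sketch only; Nix has its own lock-file helpers, and the function name here is hypothetical):

```cpp
#include <fcntl.h>
#include <sys/file.h>
#include <unistd.h>
#include <string>

// Try to take an exclusive, non-blocking lock on a temp directory in the
// store. Returns the lock-holding fd on success, or -1 if another process
// still holds the lock, in which case the GC should skip the directory.
int tryLockTempDir(const std::string & path)
{
    int fd = open(path.c_str(), O_RDONLY | O_DIRECTORY);
    if (fd == -1) return -1;

    if (flock(fd, LOCK_EX | LOCK_NB) == -1) {
        close(fd);   // lock held elsewhere: leave the directory alone
        return -1;
    }

    return fd;       // closing this fd later releases the lock
}
```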
Stdenv sets this to a bash that doesn't have readline/completion
support, so running 'nix (develop|shell)' inside a 'nix develop' gives
you a crippled shell. So let's just ignore the derivation's $SHELL.
This could break interactive use of build phases that use $SHELL, but
they appear to be fairly rare.
Disables the SA_RESTART behavior on macOS, which causes:
> Restarting of pending calls is requested by setting the SA_RESTART bit
> in sa_flags. The affected system calls include read(2), write(2),
> sendto(2), recvfrom(2), sendmsg(2) and recvmsg(2) on a communications
> channel or a slow device (such as a terminal, but not a regular file)
> and during a wait(2) or ioctl(2).
From: https://man.openbsd.org/sigaction#SA_RESTART
This being set on macOS caused a bug where read() calls to the daemon
socket were blocking after a SIGINT was received. As a result,
checkInterrupt was never reached even though the signal was received
by the signal handler thread.
On Linux, SA_RESTART is disabled by default. This probably affects
other BSDs but I don’t have the ability to test it there right now.
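For illustration only (this is not Nix's actual signal-handling code), installing a handler via sigaction(2) with SA_RESTART left out of sa_flags makes an interrupted read() return EINTR, so the caller gets a chance to notice the signal:

```cpp
#include <csignal>
#include <cstdio>
#include <unistd.h>

static volatile sig_atomic_t interrupted = 0;

static void handleSigInt(int) { interrupted = 1; }

int main()
{
    struct sigaction act {};
    act.sa_handler = handleSigInt;
    sigemptyset(&act.sa_mask);
    act.sa_flags = 0;                 // deliberately no SA_RESTART
    sigaction(SIGINT, &act, nullptr);

    char buf[256];
    // With SA_RESTART unset, this read() is interrupted by SIGINT and
    // returns -1/EINTR, giving the caller a chance to check the flag
    // (analogous to Nix's checkInterrupt()).
    ssize_t n = read(STDIN_FILENO, buf, sizeof buf);
    if (n == -1 && interrupted)
        std::printf("read interrupted; handling SIGINT\n");
    return 0;
}
```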
readDerivation is pretty slow, and while it may not be significant for
some use cases, on things like ghc-nix, where we have thousands of
derivations, it really slows things down.
So, this just doesn’t do the impure derivation check if the impure
derivation experimental feature is disabled. Perhaps we could cache
the result of isPure() and keep the check, but this is a quick fix for
the slowdown introduced with the impure derivations feature in 2.8.0.
this simplifies the setup a lot, and avoids weird looking `./file.md`
links showing up.
it also does not show regular URLs any more. currently the command
reference only has a few of them, and not showing them in the offline
documentation is hopefully not a big deal.
instead of building more special-case solutions, clumsily preprocessing
the input, or issuing verbal rules on dealing with URLs, this should
be solved sustainably by not rendering relative links in `lowdown`:
https://github.com/kristapsdz/lowdown/issues/105
This was caused by -L calling setLogFormat() again, which caused the
creation of a new progress bar without destroying the old one. So we
had two progress bars clobbering each other.
We should change 'logger' to be a smart pointer, but I'll do that in a
future PR.
Fixes #6931.
98e361ad4c introduced a regression where
previously stored attributes were replaced by placeholders. As a
result, a command like 'nix build nixpkgs#hello' had to be executed at
least twice to get caching.
This code does not seem necessary for suggestions to work.
This issue made it impossible for clients using a serve protocol of
version <= 2.3 to use the `cmdBuildDerivation` command of servers using
a protocol of version >= 2.6. The faulty version check made the server
send back build outputs that the client was not expecting.
This hang for some reason didn't trigger in the Nix build, but did when
running 'make installcheck' interactively. What happened:
* Store::addMultipleToStore() calls a SinkToSource object to copy a
path, which in turn calls LegacySSHStore::narFromPath(), which
acquires a connection.
* The SinkToSource object is not destroyed after the last byte has
been read, so the coroutine's stack is still alive and its
destructors are not run. So the connection is not released.
* Then when the next path is copied, because max-connections = 1,
LegacySSHStore::narFromPath() hangs forever waiting for a connection
to be released.
The fix is to make sure that the source object is destroyed when we're
done with it.
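Reduced to a toy example (the class names here are made up, not Nix's): the stream object owns a pooled connection that is only returned in its destructor, so its lifetime must end before the next path needs a connection.

```cpp
#include <iostream>
#include <memory>

struct Connection {
    ~Connection() { std::cout << "connection released\n"; }
};

// Stands in for the coroutine-backed source: it keeps the connection alive
// as long as the object itself is alive, even after the last byte was read.
struct PathSource {
    std::unique_ptr<Connection> conn = std::make_unique<Connection>();
};

int main()
{
    for (int path = 0; path < 2; ++path) {
        PathSource src;           // acquires a connection
        std::cout << "copied path " << path << "\n";
    }   // src is destroyed here, so the connection is released before the
        // next iteration needs one (no deadlock with a pool of size 1)
    return 0;
}
```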
Makes `printValueAsJSON` not copy paths to the store for `nix eval
--json`, `nix-instantiate --eval --json` and `nix-env --json`.
Fixes https://github.com/NixOS/nix/issues/5612
RewritingSink can handle being fed input where a reference crosses a
chunk boundary. We don't need to load the whole source into memory, and
in fact *not* loading the whole source lets Nix build FODs that do not
fit into memory (e.g. fetchurl'ing data files larger than system memory).
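To illustrate the chunk-boundary handling (a simplified sketch, not the actual RewritingSink implementation): keep the last `len(pattern) - 1` unemitted bytes of each chunk and prepend them to the next chunk, so a reference split across two chunks is still rewritten.

```cpp
#include <algorithm>
#include <iostream>
#include <string>

// Streaming search-and-replace that tolerates the pattern being split
// across chunks: 'prev' carries the tail of the previous chunk forward.
struct RewritingSinkSketch {
    std::string from, to, prev, out;

    RewritingSinkSketch(std::string from, std::string to)
        : from(std::move(from)), to(std::move(to)) {}

    void operator()(const std::string & chunk)
    {
        std::string buf = prev + chunk;
        size_t pos = 0;
        while ((pos = buf.find(from, pos)) != std::string::npos) {
            buf.replace(pos, from.size(), to);
            pos += to.size();
        }
        // Keep a tail that could still be the start of a match completed
        // by the next chunk; emit everything before it.
        size_t keep = from.empty() ? 0 : std::min(buf.size(), from.size() - 1);
        out += buf.substr(0, buf.size() - keep);
        prev = buf.substr(buf.size() - keep);
    }

    void flush() { out += prev; prev.clear(); }
};

int main()
{
    RewritingSinkSketch sink("/nix/store/aaaa-ref", "/nix/store/bbbb-ref");
    sink("some data /nix/sto");    // the reference is split across
    sink("re/aaaa-ref and more");  // these two chunks
    sink.flush();
    std::cout << sink.out << "\n"; // the reference is still rewritten
    return 0;
}
```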
Some activities are numerous but usually very short (e.g. copying a
source file to the store), which would cause a lot of flickering. So
only show activities that have been running for at least 10 ms.
Rather than directly copying the source to its destination, copy it first to a
temporary location, and eventually move that temporary. That way, the move is
at least atomic from the point of view of the destination.
The recursive copy from the STL doesn't exactly do what we need, because
1. It doesn't delete things as we go
2. It doesn't keep the mtime, which changes the NARs
So re-implement it ourselves. A bit dull, but that way we have what we want.
In `nix::rename`, if the call to `rename` fails with `EXDEV` (failure
because the source and the destination are on different filesystems),
switch to copying and then removing the source.
To avoid having to re-implement the copy manually, I switched the
function to use the C++17 `filesystem` library (which has a `copy`
function that should do what we want).
Fix #6262
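A rough sketch of that fallback (an approximation, not the exact `nix::rename` code; the function name is illustrative):

```cpp
#include <cerrno>
#include <cstdio>
#include <filesystem>
#include <string>
#include <system_error>

namespace fs = std::filesystem;

// Move 'from' to 'to'. If they are on different filesystems, rename(2)
// fails with EXDEV, so fall back to a recursive copy followed by removing
// the source.
void renameOrCopy(const std::string & from, const std::string & to)
{
    if (std::rename(from.c_str(), to.c_str()) == 0)
        return;

    if (errno != EXDEV)
        throw std::system_error(errno, std::generic_category(),
            "renaming '" + from + "' to '" + to + "'");

    // Cross-device move: copy everything (preserving symlinks), then delete.
    fs::copy(from, to,
        fs::copy_options::recursive | fs::copy_options::copy_symlinks);
    fs::remove_all(from);
}
```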
Once a derivation goal has been completed, we check whether or not
this goal was meant to be repeated to check its output.
An early return branch was preventing the worker from reaching that repeat
code branch, hence breaking the --check command (#2619).
It seems like this early return branch is an artifact of a past
refactoring. As far as I can tell, buildDone's main branch also
cleans up the tmp directory before returning.
By default, Nix sets the "cores" setting to the number of CPUs which are
physically present on the machine. If cgroups are used to limit the CPU
and memory consumption of a large Nix build, the OOM killer may be
invoked.
For example, consider a GitLab CI pipeline which builds a large software
package. The GitLab runner spawns a container whose CPU is limited to 4
cores and whose memory is limited to 16 GiB. If the underlying machine
has 64 cores, Nix will invoke the build with -j64. In many cases, that
level of parallelism will invoke the OOM killer and the build will
completely fail.
This change sets the default value of "cores" to be
ceil(cpu_quota / cpu_period), with a fallback to
std::thread::hardware_concurrency() if cgroups v2 is not detected.
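A sketch of that detection logic (simplified: it reads the top-level cgroup v2 `cpu.max`, whereas a real implementation would resolve the process's own cgroup via `/proc/self/cgroup`; the function name is illustrative):

```cpp
#include <cmath>
#include <fstream>
#include <string>
#include <thread>

// Default "cores" to ceil(cpu_quota / cpu_period) from cgroup v2's cpu.max,
// falling back to std::thread::hardware_concurrency() when no cgroup v2
// limit is found (file missing, or the quota is "max").
unsigned int defaultCores()
{
    std::ifstream cpuMax("/sys/fs/cgroup/cpu.max");
    std::string quota;
    long period = 0;
    if (cpuMax >> quota >> period && quota != "max" && period > 0)
        return static_cast<unsigned int>(
            std::ceil(std::stod(quota) / static_cast<double>(period)));
    return std::thread::hardware_concurrency();
}
```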