This test suite was in desperate need of the parameterization available
with gtest, and was a bunch of uselessly duplicated code. At least now
it's not duplicated code, though it probably still ought to have more
property tests.
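For reference, a minimal sketch of the gtest value-parameterized pattern now in
use (the names and the code under test are illustrative, not the actual suite's):

    #include <gtest/gtest.h>

    #include <string>

    // Hypothetical function under test, standing in for the real suite's subject.
    static std::string identity(std::string s) { return s; }

    struct Case { std::string input, expected; };

    class IdentityTest : public testing::TestWithParam<Case> {};

    // One parameterized test body replaces N copy-pasted TEST()s.
    TEST_P(IdentityTest, ReturnsInput)
    {
        const Case & c = GetParam();
        EXPECT_EQ(identity(c.input), c.expected);
    }

    INSTANTIATE_TEST_SUITE_P(Examples, IdentityTest,
        testing::Values(
            Case{"foo", "foo"},
            Case{"bar", "bar"}));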
Change-Id: Ia8ccee7ef4f02b2fa40417b79aa8c8f0626ea479
Fixes:
- Identifiers starting with _ are prohibited
- Some drive-by header dependency cleaning, which wound up requiring some
extra fixups.
- Fucking C-style casts, man. C++ made these 1000% worse by letting you
also do memory corruption with them via references.
- Remove casts to Expr * where ExprBlackHole is an incomplete type by
introducing an explicitly-cast eBlackHoleAddr as Expr *.
- An incredibly illegal cast of the text bytes of the StorePath hash
into a size_t directly. You can't DO THAT.
Replaced with actually parsing the hash so that 100% of the bits are
entropy, then memcpying the start of the parsed hash (see the sketch
after this list). If this shows up in a profile we should just make the
hash parser faster with a lookup table or something sensible like that.
- This horrendous bit of UB, which I thankfully slapped a deprecation
warning on; the build then didn't trigger the warning anywhere, so it
was dead code and I just deleted it. But holy crap, you *cannot* do that:
    inline void mkString(const Symbol & s)
    {
        mkString(((const std::string &) s).c_str());
    }
- Some wrong lints. Lots of wrong macro lints, and one wrong
suspicious-sizeof lint triggered by a template being instantiated only
with pointers, even though the calculation is correct for both pointers
and non-pointers.
- Exceptions in destructors strike again. I tried to catch the
exceptions that might actually happen rather than all the exceptions
imaginable. We can let the runtime hard-kill it on other exceptions
imo.
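For the StorePath hash item above, a minimal sketch of the replacement approach
(the decoded 20-byte buffer and the names are illustrative):

    #include <array>
    #include <cstddef>
    #include <cstring>

    // Instead of reinterpreting the base32 *text* of the store path hash as a
    // size_t (illegal, and each text byte only carries 5 bits of entropy),
    // decode the hash to raw bytes first, then copy its leading bytes.
    inline size_t hashPrefix(const std::array<unsigned char, 20> & decoded)
    {
        static_assert(sizeof(size_t) <= 20);
        size_t prefix;
        std::memcpy(&prefix, decoded.data(), sizeof(prefix));
        return prefix;
    }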
Change-Id: I71761620846cba64d66ee7ca231b20c061e69710
There have been multiple setting types for paths that are supposed to be
canonicalised, depending on whether zero or one, one, or any number of paths is
to be specified. Naturally, they behaved in slightly different ways in the
code. Simplify things by unifying them and removing special behaviour (mainly
the "multiple paths type can coerce to boolean" thing).
Change-Id: I7c1ce95e9c8e1829a866fb37d679e167811e9705
They are like experimental features, but opt-in instead of opt-out. They
will allow us to gracefully remove language features. See #437
Change-Id: I9ca04cc48e6926750c4d622c2b229b25cc142c42
* changes:
sqlite: add a Use::fromStrNullable
util: implement charptr_cast
tree-wide: fix a pile of lints
refactor: make HashType and Base enum classes for type safety
build: integrate clang-tidy into CI
This:
- Converts a bunch of C style casts into C++ casts.
- Removes some very silly pointer subtraction code (which is no more or
less busted on i686 than it was to begin with)
- Fixes some "technically UB" that never had to be UB in the first
place.
- Makes finally follow the noexcept status of the inner function (see
the sketch after this list). Maybe in the future we should ban the
wrapped function from not being noexcept, but that is not today.
- Makes various locally-used exceptions inherit from std::exception.
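For the finally item above, a minimal sketch of the general shape (not Lix's
exact helper):

    #include <type_traits>
    #include <utility>

    // The cleanup callable runs in the destructor; the destructor's noexcept
    // specification mirrors the callable's instead of being unconditionally
    // noexcept, which would turn any throwing cleanup into std::terminate.
    template<typename Fn>
    class Finally
    {
        Fn fn;
    public:
        explicit Finally(Fn fn) : fn(std::move(fn)) {}
        Finally(const Finally &) = delete;
        ~Finally() noexcept(std::is_nothrow_invocable_v<Fn &>) { fn(); }
    };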
Change-Id: I22e66972602604989b5e494fd940b93e0e6e9297
The |> operator is a reverse function application operator with low
binding strength, intended to replace lib.pipe. Implements RFC 148; see
the RFC text for more details. Closes #438.
Change-Id: I21df66e8014e0d4dd9753dd038560a2b0b7fd805
Currently, the parser relies on the global experimental feature flags.
In order to properly test conditional language features, we instead need
to pass the feature settings around in the parser::State.
This means that the parser cannot cache the result of isEnabled anymore,
which wouldn't necessarily hurt performance if the function didn't
perform a linear search on the list of enabled features on every single
call. While we could simply evaluate once at the start of parsing and
cache the result in the parser state, the more sustainable solution
would be to fix `isEnabled` such that all callers may profit from the
performance improvement.
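As a sketch of the kind of `isEnabled` fix alluded to above (illustrative, not
Lix's actual code): represent the enabled set as a bitset indexed by the
feature enum, so each query is O(1) instead of a linear scan.

    #include <bitset>
    #include <cstddef>

    // Illustrative feature enum with a Count sentinel for sizing the bitset.
    enum class Feature : std::size_t { PipeOperator, /* ... */ Count };

    class FeatureSettings
    {
        std::bitset<static_cast<std::size_t>(Feature::Count)> enabled;
    public:
        void enable(Feature f) { enabled.set(static_cast<std::size_t>(f)); }
        bool isEnabled(Feature f) const
        {
            return enabled.test(static_cast<std::size_t>(f));
        }
    };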
Change-Id: Ic9b9c5d882b6270e1114988b63e6064d36c25cf2
What if you could find memory bugs in Lix without really trying very
hard? I've had variously scuffed patches to do this, but this is
blocked on boost coroutines removal at this point tbh.
Change-Id: Id762af076aa06ad51e77a6c17ed10275929ed578
When the configured maximum depth has been reached, attribute sets and lists
are printed with an ellipsis to indicate the elision of nested items. Previously,
this happened even when the structure being printed was empty, so the supposedly
elided items did not in fact exist. This is confusing, so stop doing it.
Change-Id: I0016970dad3e42625e085dc896e6f476b21226c9
The repeated value detection logic exists so that the occurrence of large
common substructures does not fill up the screen or the computer's memory.
However, empty attribute sets and derivations (when their detection is enabled)
are always cheap to print, and in practice I have observed them to make up a
significant majority of the cases where I was annoyed by the repeated value
detection kicking in. Furthermore, `nix-instantiate --eval` already disables
this logic for empty attribute sets, and empty lists are already exempted
everywhere. For these reasons, always print empty attribute sets and
derivations as what they are.
Change-Id: I5dac8e7739f9d726b76fd0521ec46f38af94463f
this is cursed. deeply and profoundly cursed. under NO CIRCUMSTANCES
must protocol serializer helpers be applied to temporaries! doing so
will inevitably cause dangling references and cause the entire thing
to crash. we need to do this even so to get rid of boost coroutines,
and likewise to encapsulate the serializers we suffer today at least
a little bit to allow a gradual migration to an actual IPC protocol.
(this isn't a problem that's unique to generators. c++ coroutines in
general cannot safely take references to arbitrary temporaries since
c++ does not have a lifetime system that can make this safe. -sigh-)
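a sketch of the hazard in its smallest form, using C++23 std::generator purely
for illustration (the generators used here have the same lifetime property):

    #include <generator>
    #include <string>

    std::generator<char> chars(const std::string & s)
    {
        // `s` is read each time the coroutine is resumed, long after the call
        // expression that created the coroutine frame has finished.
        for (char c : s)
            co_yield c;
    }

    int main()
    {
        // the temporary produced by operator+ dies at the end of this
        // full-expression, but the coroutine frame still holds a reference to
        // it: iterating `g` afterwards reads a dead object.
        auto g = chars(std::string("na") + "rf");
        for (char c : g)
            (void) c; // undefined behaviour
    }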
Change-Id: I2921ba451e04d86798752d140885d3c5cc08e146
This also bans various ways of sneaking negative numbers from the language
into unsuspecting builtins, which was exposed while auditing the
consequences of changing the Nix language integer type to a newtype.
It's unlikely that this change comprehensively ensures correctness when
passing integers out of the Nix language and we should probably add a
checked-narrowing function or something similar, but that's out of scope
for the immediate change.
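As a sketch of what such checked handling can look like (illustrative names,
not the actual Lix API), using the overflow-checking builtins available in
both GCC and Clang:

    #include <cstdint>
    #include <stdexcept>

    // Illustrative newtype standing in for the Nix language integer type.
    struct NixInt { int64_t value; };

    // Checked addition: throws instead of silently wrapping or trapping.
    inline NixInt addChecked(NixInt a, NixInt b)
    {
        int64_t result;
        if (__builtin_add_overflow(a.value, b.value, &result))
            throw std::overflow_error("integer overflow in Nix expression");
        return NixInt{result};
    }

    // A checked narrowing in the spirit of the one suggested above: refuse to
    // pass negative values where an unsigned quantity is expected.
    inline uint64_t toUnsignedChecked(NixInt i)
    {
        if (i.value < 0)
            throw std::invalid_argument("negative value where an unsigned integer is required");
        return static_cast<uint64_t>(i.value);
    }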
During the development of this I found a few fun facts about the
language:
- You could overflow integers by converting from unsigned JSON values.
- You could overflow unsigned integers by converting negative numbers
into them when going into Nix config, into fetchTree, and into flake
inputs.
The flake inputs and Nix config cannot actually be tested properly
since they both ban thunks; however, we put in checks anyway because
it's possible these could somehow be used to do such shenanigans some
other way.
Note that Lix has banned Nix language integer overflows since the very
first public beta, but threw a SIGILL on them because we run with
-fsanitize=signed-integer-overflow -fsanitize-undefined-trap-on-error in
production builds. Since the Nix language uses signed integers, overflow
was simply undefined behaviour, and since we defined that to trap, it
did.
Trapping on it was bad UX, but we didn't even entirely notice that we
had done this until it was reported as a bug a couple of months later
(which is, to be fair, that flag working as intended). The behaviour has
had enough production time that, aside from code that is IMHO buggy
(and which is, in any case, not in nixpkgs) such as
lix-project/lix#445, we don't think
anyone doing anything reasonable actually depends on wrapping overflow.
Even for weird use cases such as doing funny bit crimes, it doesn't make
sense IMO to have wrapping behaviour, since two's complement arithmetic
overflow behaviour is so *aggressively* not what you want for *any* kind
of mathematics/algorithms. The Nix language exists for package
management, a domain where bit crimes are already only dubiously in
scope to begin with, and it makes a lot more sense for that domain for
the integers to never lose precision, either by throwing errors if they
would, or by being arbitrary-precision.
This change will be ported to CppNix as well, to maintain language
consistency.
Fixes: lix-project/lix#423
Change-Id: I51f253840c4af2ea5422b8a420aa5fafbf8fae75
the rewriting sink was just broken. when given a rewrite set that
contained a key that is also a proper infix of another key, it was
possible to produce an incorrectly rewritten result if the writer
used the wrong block size. fixing this duplicates rewriteStrings; to
avoid that we'll rewrite rewriteStrings to use RewritingSource in a
new mode that allows rewrites we had previously forbidden.
Change-Id: I57fa0a9a994e654e11d07172b8e31d15f0b7e8c0
generators are a better basis for serializers than streaming into sinks
as we do currently, for many reasons: they are usable as sources if one
wishes (without requiring an intermediate sink to serialize full data
sets into memory, or boost coroutines to turn sinks into sources); they
compose more naturally (one can just yield a sub-generator instead of
being forced to wrap entire substreams into clunky functions or even
clunkier custom types implementing operator<<); and they allow wrappers
to transform data with clear ownership semantics (removing the need for
explicit memory allocations and Source wrappers), among many other things
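to illustrate the composition point, a sketch using C++23 std::generator and
std::ranges::elements_of (the Generator type here is assumed to compose in
much the same way):

    #include <cstdint>
    #include <generator>
    #include <ranges>
    #include <string_view>

    // serialize a length-prefixed string (length truncated to one byte purely
    // to keep the sketch short).
    std::generator<std::uint8_t> serializeString(std::string_view s)
    {
        co_yield static_cast<std::uint8_t>(s.size());
        for (char c : s)
            co_yield static_cast<std::uint8_t>(c);
    }

    // composing is just yielding the sub-generators in order, instead of
    // wrapping each substream in a function that writes into a Sink.
    std::generator<std::uint8_t> serializePair(std::string_view a, std::string_view b)
    {
        co_yield std::ranges::elements_of(serializeString(a));
        co_yield std::ranges::elements_of(serializeString(b));
    }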
Change-Id: I361d89ff556354f6930d9204f55117565f2f7f20
this will be the basis of non-boost coroutines in lix. anything that is
a boost coroutine *should* be representable with a Generator coroutine,
and many things that are not currently boost coroutines but behave much
like one (such as, notably, serializers) should be as well. this allows
us to greatly simplify many things that look like iteration but aren't.
Change-Id: I2cebcefa0148b631fb30df4c8cfa92167a407e34
Previously, the progress bar had two subtly different states in which the bar
would not actually render, both with their own shortcomings: inactive (which
was irreversible) and paused (reversible, but swallowing logs). Furthermore,
there was no way of resetting the statistics, so a very bad solution was
implemented (243c0f18da) that would create a new
logger for each line of the repl, leaking the previous one and discarding the
value of printBuildLogs. Finally, if stderr was not attached to a TTY, the
update thread was started even though the logger was not active, violating the
invariant required by the destructor (which is not observed because the logger
is leaked).
In this commit, the two aforementioned states are unified into a single one,
which can be exited again, correctly upholds the invariant that the update
thread is only running while the progress bar is active, and does not swallow
logs. The latter change in behavior is not expected to be a problem in the
rare cases where the paused state was used before, since other loggers (like
the simple one) don't exhibit it anyway. The startProgressBar/stopProgressBar
API is removed due to being a footgun, and a new method for properly resetting
the progress is added.
Co-Authored-By: Qyriad <qyriad@qyriad.me>
Change-Id: I2b7c3eb17d439cd0c16f7b896cfb61239ac7ff3a
If we've consumed the entire input, that doesn't actually mean we're
done decompressing - there might be more output left. This worked (?)
in most cases because the input and output sizes are pretty comparable,
but sometimes they're not and then things get very funny.
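For illustration, the corrected loop shape sketched with zlib (the actual
decompression sinks wrap other libraries, but the principle is the same):
running out of input is not the termination condition; output must be drained
until the decompressor stops producing it.

    #include <zlib.h>
    #include <functional>
    #include <stdexcept>
    #include <string_view>

    void feedCompressedChunk(
        z_stream & strm,
        unsigned char * data, size_t len,
        const std::function<void(std::string_view)> & nextSink)
    {
        strm.next_in = data;
        strm.avail_in = static_cast<uInt>(len);

        char out[8192];
        int ret;
        do {
            strm.next_out = reinterpret_cast<Bytef *>(out);
            strm.avail_out = sizeof(out);
            ret = inflate(&strm, Z_NO_FLUSH);
            if (ret != Z_OK && ret != Z_STREAM_END)
                throw std::runtime_error("decompression failed");
            nextSink({out, sizeof(out) - strm.avail_out});
            // avail_in == 0 alone does not mean we are done: a completely
            // filled output buffer means inflate() still has buffered output,
            // so loop again even though there is no input left to consume.
        } while (ret == Z_OK && strm.avail_out == 0);
    }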
Change-Id: I73435a654a911b8ce25119f713b80706c5783c1b
without this we will not be able to get rid of makeDecompressionSink,
which in turn will be necessary to get rid of sourceToSink (since the
libarchive archive wrapper *must* be a Source due to api limitations)
Change-Id: Iccd3d333ba2cbcab49cb5a1d3125624de16bce27
even the transfer function is not all that necessary since there aren't
that many users, but we'll keep it for now. we could've kept both names
but we also kind of want to use `download` for something else very soon
Change-Id: I005e403ee59de433e139e37aa2045c26a523ccbf
* changes:
libstore client: remove remaining dead code
libstore: refuse to serialise ancient protocols
libstore client: remove support for <2.3 clients
libstore daemon: remove very old protocol support (<2.3)
Delete old ValidPathInfo test, fix UnkeyedValidPathInfo
Set up minimum protocol version
with the preparatory work done this mostly means turning plain pointers
into unique_ptrs, with all the associated churn that necessitates. we
might want to change some of these to box_ptrs at some point as well,
but that would be a semantic change that isn't fully appropriate yet.
Change-Id: I0c238c118617420650432f4ed45569baa3e3f413
almost all places where Exprs are passed as pointers expect the pointers
to be non-null. pass them as references to encode this constraint in the
type system as well (and also communicate that Exprs must not be freed).
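a sketch of the shape of the change (illustrative signatures, not the exact
ones touched):

    struct Expr { /* ... */ };
    struct Env { /* ... */ };
    struct Value { /* ... */ };

    // before: nothing stops a caller from passing nullptr, and the signature
    // says nothing about whether the callee takes ownership of the pointer.
    void evalBefore(Expr * e, Env & env, Value & v);

    // after: non-null is part of the type, and a reference also communicates
    // that the callee does not free the Expr.
    void evalAfter(Expr & e, Env & env, Value & v);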
Change-Id: Ia98f166fec3c23151f906e13acb4a0954a5980a2
The UnkeyedValidPathInfo test was testing an ancient version but not the
current version. Doesn't make much sense to me.
Change-Id: Ib476a4297d9075f2dcd31a073b3e7b149b2189af
The libcmd unit test creates files (more specifically, the fetcher cache) in
its home directory. In the single-user sandbox, this leads to the creation of
/homeless-shelter, since this is the default HOME and the root is writable.
Unfortunately, this conflicts with the assumption of the functional tests that
this directory does not exist. Use a different home directory to prevent these
test failures, and thus restore the ability to build inside the single-user
sandbox.
Fixes: lix-project/lix#365
Change-Id: I4df8c53d043234b95a7c0ac45fc5ee89e8d46aff
They are enabled by default, and Meson will also print whether or not
they're enabled at the end of the configuration output.
Change-Id: I48db238510bf9e74340b86f243f4bbe360794281
This causes libstore, libexpr, libfetchers, and libutil to be linked
with -Wl,--whole-archive into executables when building statically:
libstore for the store backends, libexpr for the primops, libfetchers
for the fetcher backends I assume(?), and libutil for the nix::logger
initializer (which notably shows up in pre-main constructors when HOME
is not owned by the user. cursed.).
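For context, a sketch of the kind of self-registration pattern that makes
whole-archive linking necessary (the names are hypothetical; Lix's
registration machinery differs in detail):

    #include <functional>
    #include <map>
    #include <string>

    // Backends and primops register themselves via static initializers in the
    // spirit of the pattern below. If nothing in the executable references the
    // object file containing such an initializer, a plain static link drops it
    // and the registration never runs; --whole-archive forces every object in
    // the archive into the link so these constructors actually execute.
    using PrimOpFn = std::function<void()>;

    inline std::map<std::string, PrimOpFn> & registry()
    {
        static std::map<std::string, PrimOpFn> r;
        return r;
    }

    struct RegisterPrimOp
    {
        RegisterPrimOp(std::string name, PrimOpFn fn)
        {
            registry().emplace(std::move(name), std::move(fn));
        }
    };

    // lives in an otherwise-unreferenced translation unit inside the archive
    static RegisterPrimOp regExample{"example", [] { /* ... */ }};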
This workaround should be removed when #359 is fixed.
Fixes #306.
Change-Id: Ie9ef0154e09a6ed97920ee8ab23810ca5e2de84c
It seems like someone implemented precompiled headers a long time ago
and then it never got ported to Meson, or maybe it never worked at all.
This is, however, blessedly easy to implement. I went looking for
`#define`s that could affect the result of precompiling the headers, and
as far as I can tell we aren't doing any of that, so this should truly
just be free build time savings.
Previous state:
    Compilation (551 times):
      Parsing (frontend):       1302.1 s
      Codegen & opts (backend):  956.3 s
New state:
    Compilation (567 times):
      Parsing (frontend):       1123.0 s
      Codegen & opts (backend): 1078.1 s
I wonder if the "regression" in codegen time is just doing the PCH
operation a few times, because meson does it per-target.
Change-Id: I664366b8069bab4851308b3a7571bea97ac64022
This reverts commit 285bc67318.
Reason for revert: lix-project/lix#364
For some reason this broke `main` even though the change we are reverting passed CI! Mysterious, haunted, etc. Needs more debugging, let's turn it off for now.
Change-Id: Ica4819d61cd35b83eb52985bfcb657e858f025a9
* changes:
util.hh: Delete remaining file and clean up headers
util.hh: Move nativeSystem to local-derivation-goal.cc
util.hh: Move stuff to types.hh
util.cc: Delete remaining file
util.{hh,cc}: Move ignoreException to error.{hh,cc}
util.{hh,cc}: Split out namespaces.{hh,cc}
util.{hh,cc}: Split out users.{hh,cc}
util.{hh,cc}: Split out strings.{hh,cc}
util.{hh,cc}: Split out unix-domain-socket.{hh,cc}
util.{hh,cc}: Split out child.{hh,cc}
util.{hh,cc}: Split out current-process.{hh,cc}
util.{hh,cc}: Split out processes.{hh,cc}
util.{hh,cc}: Split out file-descriptor.{hh,cc}
util.{hh,cc}: Split out file-system.{hh,cc}
util.{hh,cc}: Split out terminal.{hh,cc}
util.{hh,cc}: Split out environment-variables.{hh,cc}
while refactoring the curl wrapper we inadvertently broke the immutable
flake protocol, because the immutable flake protocol accumulates headers
across the entire redirect chain instead of using only the headers given
in the final response of the chain. this is a problem because Some Known
Providers Of Flake Infrastructure set rel=immutable link headers only on
the penultimate entry of the redirect chain, and curl does not regard
those as worth returning to us via its response header enumeration
mechanisms.
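a sketch of the relevant plumbing with the plain libcurl header callback (the
lix wrapper is structured differently; this only illustrates the principle):
the callback fires for every response in the redirect chain, so keeping
everything it hands us preserves headers from intermediate hops.

    #include <curl/curl.h>
    #include <string>
    #include <vector>

    // collect headers from every hop of the redirect chain, not just the last
    // response, so a Link: ...; rel="immutable" header on the penultimate hop
    // is still visible to the flake machinery.
    static size_t onHeader(char * buffer, size_t size, size_t nitems, void * userdata)
    {
        auto & headers = *static_cast<std::vector<std::string> *>(userdata);
        headers.emplace_back(buffer, size * nitems);
        return size * nitems;
    }

    void setupHandle(CURL * handle, std::vector<std::string> & headers)
    {
        curl_easy_setopt(handle, CURLOPT_FOLLOWLOCATION, 1L);
        curl_easy_setopt(handle, CURLOPT_HEADERFUNCTION, onHeader);
        curl_easy_setopt(handle, CURLOPT_HEADERDATA, &headers);
    }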
fixes lix-project/lix#358
Change-Id: I645c3932b465cde848bd6a3565925a1e3cbcdda0