Compare commits

...

11 commits

Author SHA1 Message Date
Zebreus
74589afdc9 Fix tests on systems with a non-master git defaultBranch
Change-Id: Ie73bbed1db9419c9885b9d57e4edb7a4047d5cce
2024-10-09 22:19:35 +02:00
0012887310 Merge "Add release note for CTRL-C improvements" into main 2024-10-08 22:15:56 +00:00
4ea8c9d643 Set c++ version to c++23
I followed @pennae's advice and moved the constructor definition of
`AttrName` from the header file `nixexpr.hh` to `nixexpr.cc`.

Change-Id: I733f56c25635b366b11ba332ccec38dd7444e793
2024-10-08 20:05:28 +02:00
43e79f4434 Fix gcc warning -Wmissing-field-initializers
The approach taken here was to add default values to the type definitions
rather than to specify the missing fields at every initialization site.

Now the only remaining warning is '-Wunused-parameter', which @jade said is
usually counterproductive and can simply be disabled:
lix-project/lix#456 (comment)

So this change adds the flags '-Wall', '-Wextra' and
'-Wno-unused-parameter', so that all warnings are enabled except for
'-Wunused-parameter'.

Change-Id: Ic223a964d67ab429e8da804c0721ba5e25d53012
2024-10-08 01:44:38 +00:00
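
To illustrate the approach (a minimal standalone sketch with a hypothetical type, not code from this changeset): under -Wextra, constructing an aggregate without naming every field triggers -Wmissing-field-initializers; giving the member a default value in the type definition silences the warning at every construction site.

    #include <string>

    // Hypothetical example type, not part of the Lix change itself.
    struct BuildInfo
    {
        int status = 0;
        // Without the "= {}" default, `BuildInfo{1}` warns under
        // -Wmissing-field-initializers because errorMsg is not mentioned.
        std::string errorMsg = {};
    };

    int main()
    {
        BuildInfo info{1};   // errorMsg falls back to its default; no warning
        return info.status;
    }
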
299813f324 Merge "Avoid calling memcpy when len == 0 in filetransfer.cc" into main 2024-10-08 01:41:41 +00:00
d6e1b11d3e Fix gcc warning -Wsign-compare
Add the compile flag '-Wsign-compare' and adapt the code to fix all
cases of this warning.

Change-Id: I26b08fa5a03e4ac294daf697d32cf9140d84350d
2024-10-08 01:32:12 +02:00
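
For illustration, a minimal sketch (hypothetical function, not from this changeset) of the kind of comparison -Wsign-compare flags and the usual fix of switching to a like-signed index, mirroring the int-to-size_t change in the compression test below:

    #include <cstddef>
    #include <string>

    // Hypothetical example; the loop mirrors the pattern fixed in this change.
    std::size_t countDots(const std::string & s)
    {
        std::size_t n = 0;
        // for (int i = 0; i < s.length(); ++i)       // -Wsign-compare: int vs. std::size_t
        for (std::size_t i = 0; i < s.length(); ++i)   // ok: both operands are unsigned
            if (s[i] == '.')
                ++n;
        return n;
    }
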
51a5025913 Avoid calling memcpy when len == 0 in filetransfer.cc
There was a bug report about a potential call to `memcpy` with a null
pointer, which is not reproducible:
lix-project/lix#492

This occurred in `src/libstore/filetransfer.cc` in `InnerSource::read`.

To ensure that this doesn't happen, an early return is added before
calling `memcpy` if the length of the data to be copied is 0.

This change also adds a test that ensures that when `InnerSource::read`
is called with an empty file, it throws an `EndOfFile` exception.

Change-Id: Ia18149bee9a3488576c864f28475a3a0c9eadfbb
2024-10-08 01:26:30 +02:00
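
A minimal standalone sketch of the guard described above (hypothetical free function; the real change is the readPartial lambda in the filetransfer.cc hunk further down). memcpy with a null source pointer is undefined behaviour even when the length is zero, so the copy is skipped entirely when there is nothing to read:

    #include <algorithm>
    #include <cstddef>
    #include <cstring>
    #include <string_view>

    // Hypothetical stand-in for the readPartial lambda in filetransfer.cc.
    std::size_t readPartial(char * data, std::size_t len, std::string_view & buffered)
    {
        const std::size_t available = std::min(len, buffered.size());
        if (available == 0) return 0;   // avoid memcpy with a possibly-null source
        std::memcpy(data, buffered.data(), available);
        buffered.remove_prefix(available);
        return available;
    }
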
ed9b7f4f84 libstore: remove Worker::{childStarted, goalFinished}
these two functions are now nearly trivial and much better inlined into
makeGoalCommon. keeping them separate also scatters information about how
goal completion flows and how failure information ends up in `Worker`.

Change-Id: I6af86996e4a2346583371186595e3013c88fb082
2024-10-05 21:19:51 +00:00
649d8cd08f libstore: remove Worker::removeGoal
we can use our newfound powers of Goal::work Is A Real Promise to remove
completed goals from continuation promises. apart from being much easier
to follow it's also a lot more efficient because we have the iterator to
the item we are trying to remove, skipping a linear search of the cache.

Change-Id: Ie0190d051c5f4b81304d98db478348b20c209df5
2024-10-05 21:19:51 +00:00
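
To illustrate the efficiency argument (a generic sketch using a plain std::map rather than the actual goal types): with the iterator already in hand the entry is erased directly, whereas the removed helper had to scan the whole map to find the matching entry.

    #include <map>
    #include <memory>
    #include <string>

    using Cache = std::map<std::string, std::shared_ptr<int>>;

    // New style: the completion continuation already holds its iterator.
    void eraseByIterator(Cache & cache, Cache::iterator it)
    {
        cache.erase(it);
    }

    // Old style (what the removed removeGoal helper did): linear search.
    void eraseByLinearSearch(Cache & cache, const std::shared_ptr<int> & goal)
    {
        for (auto i = cache.begin(); i != cache.end(); )
            if (i->second == goal)
                i = cache.erase(i);
            else
                ++i;
    }
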
9adf6f4568 libstore: remove Goal::notify
Goal::work() is a fully usable promise that does not rely on the worker
to report completion conditions. as such we no longer need the `notify`
field that enabled this interplay. we do have to clear goal caches when
destroying the worker though, otherwise goal promises may (incorrectly)
keep goals alive due to strong shared pointers created by childStarted.

Change-Id: Ie607209aafec064dbdf3464fe207d70ba9ee158a
2024-10-05 21:19:51 +00:00
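
A rough analogy for the notify removal, using the standard library as a stand-in for kj promises (hypothetical, heavily simplified types): previously the goal carried a fulfiller that the worker had to remember to complete; now work() hands back the promise itself and callers simply await it.

    #include <future>

    // Old shape: the result travels through a side channel someone else fulfills.
    struct GoalWithNotify
    {
        std::promise<int> notify;
        void work() { notify.set_value(42); }
    };

    // New shape: work() returns the promise directly; no external bookkeeping.
    struct GoalReturningPromise
    {
        std::future<int> work()
        {
            return std::async(std::launch::deferred, [] { return 42; });
        }
    };

    int main()
    {
        GoalReturningPromise goal;
        return goal.work().get();   // result flows straight back from work()
    }
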
0d484aa498 Add release note for CTRL-C improvements
I'm very excited for cl/2016, so others will probably be excited also!
Let's add a release note.

Change-Id: Ic84a4444241aafce4cb6d5a6d1dddb47e7a7dd7b
2024-10-05 10:40:51 -07:00
23 changed files with 118 additions and 132 deletions

View file

@@ -0,0 +1,13 @@
+---
+synopsis: Ctrl-C stops Nix commands much more reliably and responsively
+issues: [7245, fj#393]
+cls: [2016]
+prs: [11618]
+category: Fixes
+credits: [roberth, 9999years]
+---
+
+CTRL-C will now stop Nix commands much more reliably and responsively. While
+there are still some cases where a Nix command can be slow or unresponsive
+following a `SIGINT` (please report these as issues!), the vast majority of
+signals will now cause the Nix command to quit quickly and consistently.

View file

@@ -50,10 +50,9 @@ project('lix', 'cpp', 'rust',
   meson_version : '>=1.4.0',
   version : run_command('bash', '-c', 'echo -n $(jq -r .version < ./version.json)$VERSION_SUFFIX', check : true).stdout().strip(),
   default_options : [
-    'cpp_std=c++2a',
+    'cpp_std=c++23',
     'rust_std=2021',
-    # TODO(Qyriad): increase the warning level
-    'warning_level=1',
+    'warning_level=2',
     'debug=true',
     'optimization=2',
     'errorlogs=true', # Please print logs for tests that fail
@@ -485,6 +484,7 @@ add_project_arguments(
   # TODO(Qyriad): Yes this is how the autoconf+Make system did it.
   # It would be nice for our headers to be idempotent instead.
   '-include', 'config.h',
+  '-Wno-unused-parameter',
  '-Wno-deprecated-declarations',
  '-Wimplicit-fallthrough',
  '-Werror=switch',

View file

@@ -21,6 +21,14 @@ std::ostream & operator <<(std::ostream & str, const SymbolStr & symbol)
     return printIdentifier(str, s);
 }
 
+AttrName::AttrName(Symbol s) : symbol(s)
+{
+}
+
+AttrName::AttrName(std::unique_ptr<Expr> e) : expr(std::move(e))
+{
+}
+
 void Expr::show(const SymbolTable & symbols, std::ostream & str) const
 {
     abort();

View file

@@ -30,8 +30,8 @@ struct AttrName
 {
     Symbol symbol;
     std::unique_ptr<Expr> expr;
-    AttrName(Symbol s) : symbol(s) {};
-    AttrName(std::unique_ptr<Expr> e) : expr(std::move(e)) {};
+    AttrName(Symbol s);
+    AttrName(std::unique_ptr<Expr> e);
 };
 
 typedef std::vector<AttrName> AttrPath;

View file

@@ -9,7 +9,7 @@ namespace nix::parser {
 
 struct StringToken
 {
     std::string_view s;
-    bool hasIndentation;
+    bool hasIndentation = false;
     operator std::string_view() const { return s; }
 };

View file

@@ -47,7 +47,7 @@ struct BuildResult
      * @todo This should be an entire ErrorInfo object, not just a
      * string, for richer information.
      */
-    std::string errorMsg;
+    std::string errorMsg = {};
 
     std::string toString() const {
         auto strStatus = [&]() {
@@ -90,7 +90,7 @@ struct BuildResult
      * For derivations, a mapping from the names of the wanted outputs
      * to actual paths.
      */
-    SingleDrvOutputs builtOutputs;
+    SingleDrvOutputs builtOutputs = {};
 
     /**
      * The start/stop times of the build (or one of the rounds, if it

View file

@@ -63,7 +63,7 @@ struct InitialOutputStatus {
 struct InitialOutput {
     bool wanted;
     Hash outputHash;
-    std::optional<InitialOutputStatus> known;
+    std::optional<InitialOutputStatus> known = {};
 };
 
 /**

View file

@@ -26,7 +26,6 @@ try {
     trace("done");
 
-    notify->fulfill(result);
     cleanup();
 
     co_return std::move(result);

View file

@@ -82,19 +82,14 @@ struct Goal
      */
     std::string name;
 
-    struct WorkResult;
-
-    // for use by Worker and Goal only. will go away once work() is a promise.
-    kj::Own<kj::PromiseFulfiller<Result<WorkResult>>> notify;
-
 protected:
     AsyncSemaphore::Token slotToken;
 
 public:
     struct [[nodiscard]] WorkResult {
         ExitCode exitCode;
-        BuildResult result;
-        std::shared_ptr<Error> ex;
+        BuildResult result = {};
+        std::shared_ptr<Error> ex = {};
         bool permanentFailure = false;
         bool timedOut = false;
         bool hashMismatch = false;

View file

@@ -48,6 +48,10 @@ Worker::~Worker()
        their destructors). */
     children.clear();
 
+    derivationGoals.clear();
+    drvOutputSubstitutionGoals.clear();
+    substitutionGoals.clear();
+
     assert(expectedSubstitutions == 0);
     assert(expectedDownloadSize == 0);
     assert(expectedNarSize == 0);
@@ -67,25 +71,45 @@ std::pair<std::shared_ptr<G>, kj::Promise<Result<Goal::WorkResult>>> Worker::mak
     // and then we only want to recreate the goal *once*. concurrent accesses
     // to the worker are not sound, we want to catch them if at all possible.
     for ([[maybe_unused]] auto _attempt : {1, 2}) {
-        auto & goal_weak = it->second;
-        auto goal = goal_weak.goal.lock();
+        auto & cachedGoal = it->second;
+        auto & goal = cachedGoal.goal;
         if (!goal) {
             goal = create();
-            goal->notify = std::move(goal_weak.fulfiller);
-            goal_weak.goal = goal;
             // do not start working immediately. if we are not yet running we
             // may create dependencies as though they were toplevel goals, in
             // which case the dependencies will not report build errors. when
             // we are running we may be called for this same goal more times,
             // and then we want to modify rather than recreate when possible.
-            childStarted(goal, kj::evalLater([goal] { return goal->work(); }));
+            auto removeWhenDone = [goal, &map, it] {
+                // c++ lambda coroutine capture semantics are *so* fucked up.
+                return [](auto goal, auto & map, auto it) -> kj::Promise<Result<Goal::WorkResult>> {
+                    auto result = co_await goal->work();
+                    // a concurrent call to makeGoalCommon may have reset our
+                    // cached goal and replaced it with a new instance. don't
+                    // remove the goal in this case, otherwise we will crash.
+                    if (goal == it->second.goal) {
+                        map.erase(it);
+                    }
+                    co_return result;
+                }(goal, map, it);
+            };
+            cachedGoal.promise = kj::evalLater(std::move(removeWhenDone)).fork();
+            children.add(cachedGoal.promise.addBranch().then([this](auto _result) {
+                if (_result.has_value()) {
+                    auto & result = _result.value();
+                    permanentFailure |= result.permanentFailure;
+                    timedOut |= result.timedOut;
+                    hashMismatch |= result.hashMismatch;
+                    checkMismatch |= result.checkMismatch;
+                }
+            }));
         } else {
             if (!modify(*goal)) {
-                goal_weak = {};
+                cachedGoal = {};
                 continue;
             }
         }
-        return {goal, goal_weak.promise->addBranch()};
+        return {goal, cachedGoal.promise.addBranch()};
     }
     assert(false && "could not make a goal. possible concurrent worker access");
 }
@@ -179,58 +203,6 @@ std::pair<GoalPtr, kj::Promise<Result<Goal::WorkResult>>> Worker::makeGoal(const
     }, req.raw());
 }
 
-template<typename G>
-static void removeGoal(std::shared_ptr<G> goal, auto & goalMap)
-{
-    /* !!! inefficient */
-    for (auto i = goalMap.begin();
-         i != goalMap.end(); )
-        if (i->second.goal.lock() == goal) {
-            auto j = i; ++j;
-            goalMap.erase(i);
-            i = j;
-        }
-        else ++i;
-}
-
-void Worker::goalFinished(GoalPtr goal, Goal::WorkResult & f)
-{
-    permanentFailure |= f.permanentFailure;
-    timedOut |= f.timedOut;
-    hashMismatch |= f.hashMismatch;
-    checkMismatch |= f.checkMismatch;
-
-    removeGoal(goal);
-}
-
-void Worker::removeGoal(GoalPtr goal)
-{
-    if (auto drvGoal = std::dynamic_pointer_cast<DerivationGoal>(goal))
-        nix::removeGoal(drvGoal, derivationGoals);
-    else if (auto subGoal = std::dynamic_pointer_cast<PathSubstitutionGoal>(goal))
-        nix::removeGoal(subGoal, substitutionGoals);
-    else if (auto subGoal = std::dynamic_pointer_cast<DrvOutputSubstitutionGoal>(goal))
-        nix::removeGoal(subGoal, drvOutputSubstitutionGoals);
-    else
-        assert(false);
-}
-
-void Worker::childStarted(GoalPtr goal, kj::Promise<Result<Goal::WorkResult>> promise)
-{
-    children.add(promise
-        .then([this, goal](auto result) {
-            if (result.has_value()) {
-                goalFinished(goal, result.assume_value());
-            } else {
-                goal->notify->fulfill(result.assume_error());
-            }
-        }));
-}
-
 kj::Promise<Result<Worker::Results>> Worker::updateStatistics()
 try {
     while (true) {
@@ -275,7 +247,7 @@ Worker::Results Worker::run(std::function<Targets (GoalFactory &)> req)
         .exclusiveJoin(std::move(onInterrupt.promise));
 
     // TODO GC interface?
-    if (auto localStore = dynamic_cast<LocalStore *>(&store); localStore && settings.minFree != 0) {
+    if (auto localStore = dynamic_cast<LocalStore *>(&store); localStore && settings.minFree != 0u) {
         // Periodically wake up to see if we need to run the garbage collector.
         promise = promise.exclusiveJoin(boopGC(*localStore));
     }

View file

@@ -95,16 +95,8 @@ private:
     template<typename G>
     struct CachedGoal
     {
-        std::weak_ptr<G> goal;
-        kj::Own<kj::ForkedPromise<Result<Goal::WorkResult>>> promise;
-        kj::Own<kj::PromiseFulfiller<Result<Goal::WorkResult>>> fulfiller;
-
-        CachedGoal()
-        {
-            auto pf = kj::newPromiseAndFulfiller<Result<Goal::WorkResult>>();
-            promise = kj::heap(pf.promise.fork());
-            fulfiller = std::move(pf.fulfiller);
-        }
+        std::shared_ptr<G> goal;
+        kj::ForkedPromise<Result<Goal::WorkResult>> promise{nullptr};
     };
 
     /**
      * Maps used to prevent multiple instantiations of a goal for the
@@ -140,18 +132,6 @@ private:
      */
     bool checkMismatch = false;
 
-    void goalFinished(GoalPtr goal, Goal::WorkResult & f);
-
-    /**
-     * Remove a dead goal.
-     */
-    void removeGoal(GoalPtr goal);
-
-    /**
-     * Registers a running child process.
-     */
-    void childStarted(GoalPtr goal, kj::Promise<Result<Goal::WorkResult>> promise);
-
     /**
      * Pass current stats counters to the logger for progress bar updates.
     */

View file

@@ -6,6 +6,7 @@
 #include "signals.hh"
 #include "compression.hh"
 #include "strings.hh"
+#include <cstddef>
 
 #if ENABLE_S3
 #include <aws/core/client/ClientConfiguration.h>
@@ -784,8 +785,10 @@ struct curlFileTransfer : public FileTransfer
 
         size_t read(char * data, size_t len) override
         {
-            auto readPartial = [this](char * data, size_t len) {
+            auto readPartial = [this](char * data, size_t len) -> size_t {
                 const auto available = std::min(len, buffered.size());
+                if (available == 0u) return 0u;
                 memcpy(data, buffered.data(), available);
                 buffered.remove_prefix(available);
                 return available;

View file

@@ -20,10 +20,10 @@ struct NarMember
        file in the NAR. */
     uint64_t start = 0, size = 0;
 
-    std::string target;
+    std::string target = {};
 
     /* If this is a directory, all the children of the directory. */
-    std::map<std::string, NarMember> children;
+    std::map<std::string, NarMember> children = {};
 };
 
 struct NarAccessor : public FSAccessor

View file

@@ -17,7 +17,7 @@ namespace nix {
 
 struct StorePathWithOutputs
 {
     StorePath path;
-    std::set<std::string> outputs;
+    std::set<std::string> outputs = {};
 
     std::string to_string(const Store & store) const;

View file

@@ -50,7 +50,7 @@ struct Realisation {
     DrvOutput id;
     StorePath outPath;
 
-    StringSet signatures;
+    StringSet signatures = {};
 
     /**
      * The realisations that are required for the current one to be valid.
@@ -58,7 +58,7 @@ struct Realisation {
      * When importing this realisation, the store will first check that all its
      * dependencies exist, and map to the correct output path
      */
-    std::map<DrvOutput, StorePath> dependentRealisations;
+    std::map<DrvOutput, StorePath> dependentRealisations = {};
 
     nlohmann::json toJSON() const;
     static Realisation fromJSON(const nlohmann::json& json, const std::string& whence);

View file

@@ -829,7 +829,7 @@ StorePathSet Store::queryValidPaths(const StorePathSet & paths, SubstituteFlag m
     {
         size_t left;
         StorePathSet valid;
-        std::exception_ptr exc;
+        std::exception_ptr exc = {};
     };
 
     Sync<State> state_(State{paths.size(), StorePathSet()});

View file

@@ -70,17 +70,17 @@ inline bool operator<=(const Trace& lhs, const Trace& rhs);
 inline bool operator>=(const Trace& lhs, const Trace& rhs);
 
 struct ErrorInfo {
-    Verbosity level;
+    Verbosity level = Verbosity::lvlError;
     HintFmt msg;
     std::shared_ptr<Pos> pos;
-    std::list<Trace> traces;
+    std::list<Trace> traces = {};
 
     /**
      * Exit status.
      */
     unsigned int status = 1;
 
-    Suggestions suggestions;
+    Suggestions suggestions = {};
 
     static std::optional<std::string> programName;
 };

View file

@@ -78,11 +78,11 @@ struct RunOptions
 {
     Path program;
     bool searchPath = true;
-    Strings args;
-    std::optional<uid_t> uid;
-    std::optional<uid_t> gid;
-    std::optional<Path> chdir;
-    std::optional<std::map<std::string, std::string>> environment;
+    Strings args = {};
+    std::optional<uid_t> uid = {};
+    std::optional<uid_t> gid = {};
+    std::optional<Path> chdir = {};
+    std::optional<std::map<std::string, std::string>> environment = {};
     bool captureStdout = false;
     bool mergeStderrToStdout = false;
     bool isInteractive = false;

View file

@@ -8,6 +8,8 @@ clearStore
 # See https://github.com/NixOS/nix/issues/6195
 repo=$TEST_ROOT/./git
 
+default_branch="$(git config init.defaultBranch)"
+
 export _NIX_FORCE_HTTP=1
 
 rm -rf $repo ${repo}-tmp $TEST_HOME/.cache/nix $TEST_ROOT/worktree $TEST_ROOT/shallow $TEST_ROOT/minimal
@@ -47,7 +49,7 @@ git -C $repo checkout -b devtest
 echo "different file" >> $TEST_ROOT/git/differentbranch
 git -C $repo add differentbranch
 git -C $repo commit -m 'Test2'
-git -C $repo checkout master
+git -C $repo checkout "$default_branch"
 devrev=$(git -C $repo rev-parse devtest)
 out=$(nix eval --impure --raw --expr "builtins.fetchGit { url = \"file://$repo\"; rev = \"$devrev\"; }" 2>&1) || status=$?
 [[ $status == 1 ]]
@@ -118,7 +120,7 @@ path2=$(nix eval --impure --raw --expr "(builtins.fetchGit $repo).outPath")
 [[ $(nix eval --impure --raw --expr "(builtins.fetchGit $repo).dirtyShortRev") = "${rev2:0:7}-dirty" ]]
 
 # ... unless we're using an explicit ref or rev.
-path3=$(nix eval --impure --raw --expr "(builtins.fetchGit { url = $repo; ref = \"master\"; }).outPath")
+path3=$(nix eval --impure --raw --expr "(builtins.fetchGit { url = $repo; ref = \"$default_branch\"; }).outPath")
 [[ $path = $path3 ]]
 
 path3=$(nix eval --raw --expr "(builtins.fetchGit { url = $repo; rev = \"$rev2\"; }).outPath")

View file

@@ -6,6 +6,8 @@ clearStore
 
 repo="$TEST_ROOT/git"
 
+default_branch="$(git config init.defaultBranch)"
+
 rm -rf "$repo" "${repo}-tmp" "$TEST_HOME/.cache/nix"
 
 git init "$repo"
@@ -16,7 +18,7 @@ echo utrecht > "$repo"/hello
 git -C "$repo" add hello
 git -C "$repo" commit -m 'Bla1'
 
-path=$(nix eval --raw --impure --expr "(builtins.fetchGit { url = $repo; ref = \"master\"; }).outPath")
+path=$(nix eval --raw --impure --expr "(builtins.fetchGit { url = $repo; ref = \"$default_branch\"; }).outPath")
 
 # Test various combinations of ref names
 # (taken from the git project)
@@ -38,7 +40,7 @@ path=$(nix eval --raw --impure --expr "(builtins.fetchGit { url = $repo; ref = \
 valid_ref() {
     { set +x; printf >&2 '\n>>>>>>>>>> valid_ref %s\b <<<<<<<<<<\n' $(printf %s "$1" | sed -n -e l); set -x; }
     git check-ref-format --branch "$1" >/dev/null
-    git -C "$repo" branch "$1" master >/dev/null
+    git -C "$repo" branch "$1" "$default_branch" >/dev/null
     path1=$(nix eval --raw --impure --expr "(builtins.fetchGit { url = $repo; ref = ''$1''; }).outPath")
     [[ $path1 = $path ]]
     git -C "$repo" branch -D "$1" >/dev/null

View file

@@ -3,6 +3,9 @@ source ./common.sh
 
 requireGit
 clearStore
 
+default_branch="$(git config init.defaultBranch)"
+
 rm -rf $TEST_HOME/.cache $TEST_HOME/.config
 
 flake1Dir=$TEST_ROOT/flake1
@@ -15,10 +18,10 @@ badFlakeDir=$TEST_ROOT/badFlake
 flakeGitBare=$TEST_ROOT/flakeGitBare
 
 for repo in $flake1Dir $flake2Dir $flake3Dir $flake7Dir $nonFlakeDir; do
-    # Give one repo a non-main initial branch.
+    # Give one repo a non-default initial branch.
     extraArgs=
     if [[ $repo == $flake2Dir ]]; then
-        extraArgs="--initial-branch=main"
+        extraArgs="--initial-branch=notdefault"
     fi
 
     createGitRepo "$repo" "$extraArgs"
@@ -152,11 +155,11 @@ nix build -o $TEST_ROOT/result $flake2Dir#bar --no-write-lock-file
 expect 1 nix build -o $TEST_ROOT/result $flake2Dir#bar --no-update-lock-file 2>&1 | grep 'requires lock file changes'
 nix build -o $TEST_ROOT/result $flake2Dir#bar --commit-lock-file
 [[ -e $flake2Dir/flake.lock ]]
-[[ -z $(git -C $flake2Dir diff main || echo failed) ]]
+[[ -z $(git -C $flake2Dir diff notdefault || echo failed) ]]
 
 # Rerunning the build should not change the lockfile.
 nix build -o $TEST_ROOT/result $flake2Dir#bar
-[[ -z $(git -C $flake2Dir diff main || echo failed) ]]
+[[ -z $(git -C $flake2Dir diff notdefault || echo failed) ]]
 
 # Building with a lockfile should not require a fetch of the registry.
 nix build -o $TEST_ROOT/result --flake-registry file:///no-registry.json $flake2Dir#bar --refresh
@@ -165,7 +168,7 @@ nix build -o $TEST_ROOT/result --no-use-registries $flake2Dir#bar --refresh
 
 # Updating the flake should not change the lockfile.
 nix flake lock $flake2Dir
-[[ -z $(git -C $flake2Dir diff main || echo failed) ]]
+[[ -z $(git -C $flake2Dir diff notdefault || echo failed) ]]
 
 # Now we should be able to build the flake in pure mode.
 nix build -o $TEST_ROOT/result flake2#bar
@@ -200,7 +203,7 @@ nix build -o $TEST_ROOT/result $flake3Dir#"sth sth"
 nix build -o $TEST_ROOT/result $flake3Dir#"sth%20sth"
 
 # Check whether it saved the lockfile
-[[ -n $(git -C $flake3Dir diff master) ]]
+[[ -n $(git -C $flake3Dir diff "$default_branch") ]]
 
 git -C $flake3Dir add flake.lock
@@ -286,7 +289,7 @@ nix build -o $TEST_ROOT/result $flake3Dir#sth --commit-lock-file
 Flake lock file updates:
 
 "?" Added input 'nonFlake':
-    'git+file://"*"/flakes/flakes/nonFlake?ref=refs/heads/master&rev="*"' "*"
+    'git+file://"*"/flakes/flakes/nonFlake?ref=refs/heads/$default_branch&rev="*"' "*"
 "?" Added input 'nonFlakeFile':
     'path:"*"/flakes/flakes/nonFlake/README.md?lastModified="*"&narHash=sha256-cPh6hp48IOdRxVV3xGd0PDgSxgzj5N/2cK0rMPNaR4o%3D' "*"
 "?" Added input 'nonFlakeFile2':
@@ -313,10 +316,10 @@ nix build -o $TEST_ROOT/result flake4#xyzzy
 
 # Test 'nix flake update' and --override-flake.
 nix flake lock $flake3Dir
-[[ -z $(git -C $flake3Dir diff master || echo failed) ]]
+[[ -z $(git -C $flake3Dir diff "$default_branch" || echo failed) ]]
 
 nix flake update --flake "$flake3Dir" --override-flake flake2 nixpkgs
-[[ ! -z $(git -C "$flake3Dir" diff master || echo failed) ]]
+[[ ! -z $(git -C "$flake3Dir" diff "$default_branch" || echo failed) ]]
 
 # Make branch "removeXyzzy" where flake3 doesn't have xyzzy anymore
 git -C $flake3Dir checkout -b removeXyzzy
@@ -350,7 +353,7 @@ EOF
 nix flake lock $flake3Dir
 git -C $flake3Dir add flake.nix flake.lock
 git -C $flake3Dir commit -m 'Remove packages.xyzzy'
-git -C $flake3Dir checkout master
+git -C $flake3Dir checkout "$default_branch"
 
 # Test whether fuzzy-matching works for registry entries.
 (! nix build -o $TEST_ROOT/result flake4/removeXyzzy#xyzzy)
@@ -499,7 +502,7 @@ nix flake lock $flake3Dir --override-input flake2/flake1 file://$TEST_ROOT/flake
 nix flake lock $flake3Dir --override-input flake2/flake1 flake1
 [[ $(jq -r .nodes.flake1_2.locked.rev $flake3Dir/flake.lock) =~ $hash2 ]]
 
-nix flake lock $flake3Dir --override-input flake2/flake1 flake1/master/$hash1
+nix flake lock $flake3Dir --override-input flake2/flake1 "flake1/$default_branch/$hash1"
 [[ $(jq -r .nodes.flake1_2.locked.rev $flake3Dir/flake.lock) =~ $hash1 ]]
 
 nix flake lock $flake3Dir
@@ -510,8 +513,8 @@ nix flake update flake2/flake1 --flake "$flake3Dir"
 [[ $(jq -r .nodes.flake1_2.locked.rev "$flake3Dir/flake.lock") =~ $hash2 ]]
 
 # Test updating multiple inputs.
-nix flake lock "$flake3Dir" --override-input flake1 flake1/master/$hash1
-nix flake lock "$flake3Dir" --override-input flake2/flake1 flake1/master/$hash1
+nix flake lock "$flake3Dir" --override-input flake1 "flake1/$default_branch/$hash1"
+nix flake lock "$flake3Dir" --override-input flake2/flake1 "flake1/$default_branch/$hash1"
 [[ $(jq -r .nodes.flake1.locked.rev "$flake3Dir/flake.lock") =~ $hash1 ]]
 [[ $(jq -r .nodes.flake1_2.locked.rev "$flake3Dir/flake.lock") =~ $hash1 ]]

View file

@@ -150,6 +150,14 @@ TEST(FileTransfer, exceptionAbortsDownload)
     }
 }
 
+TEST(FileTransfer, exceptionAbortsRead)
+{
+    auto [port, srv] = serveHTTP("200 ok", "content-length: 0\r\n", [] { return ""; });
+    auto ft = makeFileTransfer();
+    char buf[10] = "";
+    ASSERT_THROW(ft->download(FileTransferRequest(fmt("http://[::1]:%d/index", port)))->read(buf, 10), EndOfFile);
+}
+
 TEST(FileTransfer, NOT_ON_DARWIN(reportsSetupErrors))
 {
     auto [port, srv] = serveHTTP("404 not found", "", [] { return ""; });

View file

@@ -1,4 +1,5 @@
 #include "compression.hh"
+#include <cstddef>
 #include <gtest/gtest.h>
 
 namespace nix {
@@ -147,7 +148,7 @@ TEST_P(PerTypeNonNullCompressionTest, truncatedValidInput)
     /* n.b. This also tests zero-length input, which is also invalid.
      * As of the writing of this comment, it returns empty output, but is
      * allowed to throw a compression error instead. */
-    for (int i = 0; i < compressed.length(); ++i) {
+    for (size_t i = 0u; i < compressed.length(); ++i) {
         auto newCompressed = compressed.substr(compressed.length() - i);
         try {
             decompress(method, newCompressed);