Compare commits

...

12 commits

Author SHA1 Message Date
Maximilian Bosch 07bf4fcd61
Allow specifying jobset inputs for flake builds for Hydra plugins
It seems to be a common pattern to configure Hydra plugins on a
per-jobset basis with jobset inputs. Since flakes no longer need jobset
inputs, it has also become impossible to use plugins for flake
jobsets.

This patch changes this and adds a warning to avoid confusion. In the
future, we may want to restrict jobset inputs for flakes to string
values only.
2024-08-27 09:08:45 +02:00
Maximilian Bosch 76e01ad59b
Add 'private' flag to projects to display project & all associated things for authenticated users only 2024-08-27 09:08:45 +02:00
Pierre Bourdon 8d5d4942e1
queue-runner: remove unused method from State 2024-08-27 02:57:37 +02:00
Pierre Bourdon e5a8ee5c17
web: require permissions for /api/push 2024-08-27 02:57:16 +02:00
Pierre Bourdon fd7fd0ad65
treewide: clang-tidy modernize 2024-08-27 01:33:12 +02:00
Pierre Bourdon d3fcedbcf5
treewide: enable clang-tidy bugprone findings
Fix some trivial findings throughout the codebase, mostly making
implicit casts explicit.
2024-08-27 00:43:17 +02:00
Pierre Bourdon 3891ad77e3
queue-runner: change Machine object creation to work around clang bug
https://github.com/llvm/llvm-project/issues/106123
2024-08-26 22:34:48 +02:00
Pierre Bourdon 21fd1f8993
flake: add devShells, including a clang one for clang-tidy & more 2024-08-26 22:22:18 +02:00
emily ab6d81fad4
api: fix github webhook 2024-08-26 20:26:21 +02:00
Sandro 64df0cba47
Match URIs that don't end in .git
Co-authored-by: Charlotte <lotte@chir.rs>
2024-08-26 20:26:21 +02:00
Sandro Jäckel 6179b298cb
Add gitea push hook 2024-08-26 20:26:20 +02:00
Pierre Bourdon 44b9a7b95d
queue-runner: handle broken pg pool connections in builder code
Completes 9b62c52e5c with another location
that was initially missed.
2024-08-25 22:05:13 +02:00
40 changed files with 713 additions and 251 deletions

.clang-tidy Normal file
View file

@ -0,0 +1,12 @@
UseColor: true
Checks:
- -*
- bugprone-*
# kind of nonsense
- -bugprone-easily-swappable-parameters
# many warnings due to not recognizing `assert` properly
- -bugprone-unchecked-optional-access
- modernize-*
- -modernize-use-trailing-return-type

View file

@ -312,6 +312,21 @@ Declarative Projects
see this [chapter](./plugins/declarative-projects.md) see this [chapter](./plugins/declarative-projects.md)
Private Projects
----------------
By checking the `Private` checkbox in the project creation form, a project
and everything related to it (jobsets, evals, builds, etc.) is made accessible
to authenticated users only. For unauthenticated requests, the API and web
UI return a 404. This is the main difference from "hidden" projects, whose
contents can still be retrieved if the URLs are known.
Please note that the store paths realized by evaluations of private projects
are not protected! It is merely assumed that their hashes are unknown and
thus inaccessible. For real protection of the binary cache, it is recommended
either to use `nix.sshServe` instead or to protect the routes `/nar/*` and
`*.narinfo` with a reverse proxy.
Email Notifications Email Notifications
------------------- -------------------

View file

@ -1,9 +1,12 @@
# Webhooks # Webhooks
Hydra can be notified by github's webhook to trigger a new evaluation when a Hydra can be notified by github or gitea with webhooks to trigger a new evaluation when a
jobset has a github repo in its input. jobset has a github repo in its input.
To set up a github webhook go to `https://github.com/<yourhandle>/<yourrepo>/settings` and in the `Webhooks` tab
click on `Add webhook`. ## GitHub
To set up a webhook for a GitHub repository go to `https://github.com/<yourhandle>/<yourrepo>/settings`
and in the `Webhooks` tab click on `Add webhook`.
- In `Payload URL` fill in `https://<your-hydra-domain>/api/push-github`. - In `Payload URL` fill in `https://<your-hydra-domain>/api/push-github`.
- In `Content type` switch to `application/json`. - In `Content type` switch to `application/json`.
@ -11,3 +14,14 @@ click on `Add webhook`.
- For `Which events would you like to trigger this webhook?` keep the default option for events on `Just the push event.`. - For `Which events would you like to trigger this webhook?` keep the default option for events on `Just the push event.`.
Then add the hook with `Add webhook`. Then add the hook with `Add webhook`.
## Gitea
To set up a webhook for a Gitea repository go to the settings of the repository in your Gitea instance
and in the `Webhooks` tab click on `Add Webhook` and choose `Gitea` in the drop down.
- In `Target URL` fill in `https://<your-hydra-domain>/api/push-gitea`.
- Keep HTTP method `POST`, POST Content Type `application/json` and Trigger On `Push Events`.
- Change the branch filter to match the git branch hydra builds.
Then add the hook with `Add webhook`.

View file

@ -73,6 +73,21 @@
default = pkgsBySystem.${system}.hydra; default = pkgsBySystem.${system}.hydra;
}); });
devShells = forEachSystem (system: let
pkgs = pkgsBySystem.${system};
lib = pkgs.lib;
mkDevShell = stdenv: (pkgs.mkShell.override { inherit stdenv; }) {
inputsFrom = [ (self.packages.${system}.default.override { inherit stdenv; }) ];
packages =
lib.optional (stdenv.cc.isClang && stdenv.hostPlatform == stdenv.buildPlatform) pkgs.clang-tools;
};
in {
default = mkDevShell pkgs.stdenv;
clang = mkDevShell pkgs.clangStdenv;
});
nixosModules = import ./nixos-modules { nixosModules = import ./nixos-modules {
overlays = overlayList; overlays = overlayList;
}; };

View file

@ -184,6 +184,9 @@ paths:
visible: visible:
description: when set to true the project is displayed in the web interface description: when set to true the project is displayed in the web interface
type: boolean type: boolean
private:
description: when set to true the project and all related objects are only accessible to authenticated users
type: boolean
declarative: declarative:
description: declarative input configured for this project description: declarative input configured for this project
type: object type: object
@ -625,6 +628,9 @@ components:
hidden: hidden:
description: when set to true the project is not displayed in the web interface description: when set to true the project is not displayed in the web interface
type: boolean type: boolean
private:
description: when set to true the project and all related objects are only accessible to authenticated users
type: boolean
enabled: enabled:
description: when set to true the project gets scheduled for evaluation description: when set to true the project gets scheduled for evaluation
type: boolean type: boolean

View file

@ -98,6 +98,7 @@ let
FileLibMagic FileLibMagic
FileSlurper FileSlurper
FileWhich FileWhich
HTMLTreeBuilderXPath
IOCompress IOCompress
IPCRun IPCRun
IPCRun3 IPCRun3

View file

@ -1,21 +0,0 @@
# IMPORTANT: if you delete this file your app will not work as
# expected. you have been warned
use strict;
use warnings;
use inc::Module::Install;
name 'Hydra';
all_from 'lib/Hydra.pm';
requires 'Catalyst::Runtime' => '5.7015';
requires 'Catalyst::Plugin::ConfigLoader';
requires 'Catalyst::Plugin::Static::Simple';
requires 'Catalyst::Action::RenderView';
requires 'parent';
requires 'Config::General'; # This should reflect the config file format you've chosen
# See Catalyst::Plugin::ConfigLoader for supported formats
catalyst;
install_script glob('script/*.pl');
auto_install;
WriteAll;

View file

@ -14,11 +14,12 @@
#include <sys/wait.h> #include <sys/wait.h>
#include <boost/format.hpp> #include <boost/format.hpp>
#include <utility>
using namespace nix; using namespace nix;
using boost::format; using boost::format;
typedef std::pair<std::string, std::string> JobsetName; using JobsetName = std::pair<std::string, std::string>;
class JobsetId { class JobsetId {
public: public:
@ -28,8 +29,8 @@ class JobsetId {
int id; int id;
JobsetId(const std::string & project, const std::string & jobset, int id) JobsetId(std::string project, std::string jobset, int id)
: project{ project }, jobset{ jobset }, id{ id } : project{std::move( project )}, jobset{std::move( jobset )}, id{ id }
{ {
} }
@ -41,7 +42,7 @@ class JobsetId {
friend bool operator== (const JobsetId & lhs, const JobsetName & rhs); friend bool operator== (const JobsetId & lhs, const JobsetName & rhs);
friend bool operator!= (const JobsetId & lhs, const JobsetName & rhs); friend bool operator!= (const JobsetId & lhs, const JobsetName & rhs);
std::string display() const { [[nodiscard]] std::string display() const {
return str(format("%1%:%2% (jobset#%3%)") % project % jobset % id); return str(format("%1%:%2% (jobset#%3%)") % project % jobset % id);
} }
}; };
@ -88,11 +89,11 @@ struct Evaluator
JobsetId name; JobsetId name;
std::optional<EvaluationStyle> evaluation_style; std::optional<EvaluationStyle> evaluation_style;
time_t lastCheckedTime, triggerTime; time_t lastCheckedTime, triggerTime;
int checkInterval; time_t checkInterval;
Pid pid; Pid pid;
}; };
typedef std::map<JobsetId, Jobset> Jobsets; using Jobsets = std::map<JobsetId, Jobset>;
std::optional<JobsetName> evalOne; std::optional<JobsetName> evalOne;
@ -138,13 +139,15 @@ struct Evaluator
if (evalOne && name != *evalOne) continue; if (evalOne && name != *evalOne) continue;
auto res = state->jobsets.try_emplace(name, Jobset{name}); auto res = state->jobsets.try_emplace(name, Jobset{.name=name});
auto & jobset = res.first->second; auto & jobset = res.first->second;
jobset.lastCheckedTime = row["lastCheckedTime"].as<time_t>(0); jobset.lastCheckedTime = row["lastCheckedTime"].as<time_t>(0);
jobset.triggerTime = row["triggerTime"].as<time_t>(notTriggered); jobset.triggerTime = row["triggerTime"].as<time_t>(notTriggered);
jobset.checkInterval = row["checkInterval"].as<time_t>(); jobset.checkInterval = row["checkInterval"].as<time_t>();
switch (row["jobset_enabled"].as<int>(0)) {
int eval_style = row["jobset_enabled"].as<int>(0);
switch (eval_style) {
case 1: case 1:
jobset.evaluation_style = EvaluationStyle::SCHEDULE; jobset.evaluation_style = EvaluationStyle::SCHEDULE;
break; break;
@ -154,6 +157,9 @@ struct Evaluator
case 3: case 3:
jobset.evaluation_style = EvaluationStyle::ONE_AT_A_TIME; jobset.evaluation_style = EvaluationStyle::ONE_AT_A_TIME;
break; break;
default:
// Disabled or unknown. Leave as nullopt.
break;
} }
seen.insert(name); seen.insert(name);
@ -175,7 +181,7 @@ struct Evaluator
void startEval(State & state, Jobset & jobset) void startEval(State & state, Jobset & jobset)
{ {
time_t now = time(0); time_t now = time(nullptr);
printInfo("starting evaluation of jobset %s (last checked %d s ago)", printInfo("starting evaluation of jobset %s (last checked %d s ago)",
jobset.name.display(), jobset.name.display(),
@ -228,7 +234,7 @@ struct Evaluator
return false; return false;
} }
if (jobset.lastCheckedTime + jobset.checkInterval <= time(0)) { if (jobset.lastCheckedTime + jobset.checkInterval <= time(nullptr)) {
// Time to schedule a fresh evaluation. If the jobset // Time to schedule a fresh evaluation. If the jobset
// is a ONE_AT_A_TIME jobset, ensure the previous jobset // is a ONE_AT_A_TIME jobset, ensure the previous jobset
// has no remaining, unfinished work. // has no remaining, unfinished work.
@ -301,7 +307,7 @@ struct Evaluator
/* Put jobsets in order of ascending trigger time, last checked /* Put jobsets in order of ascending trigger time, last checked
time, and name. */ time, and name. */
std::sort(sorted.begin(), sorted.end(), std::ranges::sort(sorted,
[](const Jobsets::iterator & a, const Jobsets::iterator & b) { [](const Jobsets::iterator & a, const Jobsets::iterator & b) {
return return
a->second.triggerTime != b->second.triggerTime a->second.triggerTime != b->second.triggerTime
@ -324,7 +330,7 @@ struct Evaluator
while (true) { while (true) {
time_t now = time(0); time_t now = time(nullptr);
std::chrono::seconds sleepTime = std::chrono::seconds::max(); std::chrono::seconds sleepTime = std::chrono::seconds::max();
@ -411,7 +417,7 @@ struct Evaluator
printInfo("evaluation of jobset %s %s", printInfo("evaluation of jobset %s %s",
jobset.name.display(), statusToString(status)); jobset.name.display(), statusToString(status));
auto now = time(0); auto now = time(nullptr);
jobset.triggerTime = notTriggered; jobset.triggerTime = notTriggered;
jobset.lastCheckedTime = now; jobset.lastCheckedTime = now;

View file

@ -1,5 +1,6 @@
#include <algorithm> #include <algorithm>
#include <cmath> #include <cmath>
#include <ranges>
#include <sys/types.h> #include <sys/types.h>
#include <sys/stat.h> #include <sys/stat.h>
@ -41,6 +42,7 @@ static Strings extraStoreArgs(std::string & machine)
} }
} catch (BadURL &) { } catch (BadURL &) {
// We just try to continue with `machine->sshName` here for backwards compat. // We just try to continue with `machine->sshName` here for backwards compat.
printMsg(lvlWarn, "could not parse machine URL '%s', passing through to SSH", machine);
} }
return result; return result;
@ -133,8 +135,8 @@ static void copyClosureTo(
auto sorted = destStore.topoSortPaths(closure); auto sorted = destStore.topoSortPaths(closure);
StorePathSet missing; StorePathSet missing;
for (auto i = sorted.rbegin(); i != sorted.rend(); ++i) for (auto & i : std::ranges::reverse_view(sorted))
if (!present.count(*i)) missing.insert(*i); if (!present.count(i)) missing.insert(i);
printMsg(lvlDebug, "sending %d missing paths", missing.size()); printMsg(lvlDebug, "sending %d missing paths", missing.size());
@ -304,12 +306,12 @@ static BuildResult performBuild(
time_t startTime, stopTime; time_t startTime, stopTime;
startTime = time(0); startTime = time(nullptr);
{ {
MaintainCount<counter> mc(nrStepsBuilding); MaintainCount<counter> mc(nrStepsBuilding);
result = ServeProto::Serialise<BuildResult>::read(localStore, conn); result = ServeProto::Serialise<BuildResult>::read(localStore, conn);
} }
stopTime = time(0); stopTime = time(nullptr);
if (!result.startTime) { if (!result.startTime) {
// If the builder gave `startTime = 0`, use our measurements // If the builder gave `startTime = 0`, use our measurements
@ -338,10 +340,10 @@ static BuildResult performBuild(
// were known // were known
assert(outputPath); assert(outputPath);
auto outputHash = outputHashes.at(outputName); auto outputHash = outputHashes.at(outputName);
auto drvOutput = DrvOutput { outputHash, outputName }; auto drvOutput = DrvOutput { .drvHash=outputHash, .outputName=outputName };
result.builtOutputs.insert_or_assign( result.builtOutputs.insert_or_assign(
std::move(outputName), std::move(outputName),
Realisation { drvOutput, *outputPath }); Realisation { .id=drvOutput, .outPath=*outputPath });
} }
} }
@ -634,7 +636,7 @@ void State::buildRemote(ref<Store> destStore,
* copying outputs and we end up building too many things that we * copying outputs and we end up building too many things that we
* haven't been able to allow copy slots for. */ * haven't been able to allow copy slots for. */
assert(reservation.unique()); assert(reservation.unique());
reservation = 0; reservation = nullptr;
wakeDispatcher(); wakeDispatcher();
StorePathSet outputs; StorePathSet outputs;
@ -697,7 +699,7 @@ void State::buildRemote(ref<Store> destStore,
if (info->consecutiveFailures == 0 || info->lastFailure < now - std::chrono::seconds(30)) { if (info->consecutiveFailures == 0 || info->lastFailure < now - std::chrono::seconds(30)) {
info->consecutiveFailures = std::min(info->consecutiveFailures + 1, (unsigned int) 4); info->consecutiveFailures = std::min(info->consecutiveFailures + 1, (unsigned int) 4);
info->lastFailure = now; info->lastFailure = now;
int delta = retryInterval * std::pow(retryBackoff, info->consecutiveFailures - 1) + (rand() % 30); int delta = static_cast<int>(retryInterval * std::pow(retryBackoff, info->consecutiveFailures - 1) + (rand() % 30));
printMsg(lvlInfo, "will disable machine %1% for %2%s", machine->sshName, delta); printMsg(lvlInfo, "will disable machine %1% for %2%s", machine->sshName, delta);
info->disabledUntil = now + std::chrono::seconds(delta); info->disabledUntil = now + std::chrono::seconds(delta);
} }

View file

@ -35,10 +35,18 @@ void State::builder(MachineReservation::ptr reservation)
activeSteps_.lock()->erase(activeStep); activeSteps_.lock()->erase(activeStep);
}); });
auto conn(dbPool.get());
try { try {
auto destStore = getDestStore(); auto destStore = getDestStore();
// Might release the reservation. // Might release the reservation.
res = doBuildStep(destStore, reservation, activeStep); res = doBuildStep(destStore, reservation, *conn, activeStep);
} catch (pqxx::broken_connection & e) {
printMsg(lvlError, "db lost while building %s on %s: %s (retriable)",
localStore->printStorePath(activeStep->step->drvPath),
reservation ? reservation->machine->sshName : std::string("(no machine)"),
e.what());
conn.markBad();
} catch (std::exception & e) { } catch (std::exception & e) {
printMsg(lvlError, "uncaught exception building %s on %s: %s", printMsg(lvlError, "uncaught exception building %s on %s: %s",
localStore->printStorePath(activeStep->step->drvPath), localStore->printStorePath(activeStep->step->drvPath),
@ -50,7 +58,7 @@ void State::builder(MachineReservation::ptr reservation)
/* If the machine hasn't been released yet, release and wake up the dispatcher. */ /* If the machine hasn't been released yet, release and wake up the dispatcher. */
if (reservation) { if (reservation) {
assert(reservation.unique()); assert(reservation.unique());
reservation = 0; reservation = nullptr;
wakeDispatcher(); wakeDispatcher();
} }
@ -64,7 +72,7 @@ void State::builder(MachineReservation::ptr reservation)
step_->tries++; step_->tries++;
nrRetries++; nrRetries++;
if (step_->tries > maxNrRetries) maxNrRetries = step_->tries; // yeah yeah, not atomic if (step_->tries > maxNrRetries) maxNrRetries = step_->tries; // yeah yeah, not atomic
int delta = retryInterval * std::pow(retryBackoff, step_->tries - 1) + (rand() % 10); int delta = static_cast<int>(retryInterval * std::pow(retryBackoff, step_->tries - 1) + (rand() % 10));
printMsg(lvlInfo, "will retry %s after %ss", localStore->printStorePath(step->drvPath), delta); printMsg(lvlInfo, "will retry %s after %ss", localStore->printStorePath(step->drvPath), delta);
step_->after = std::chrono::system_clock::now() + std::chrono::seconds(delta); step_->after = std::chrono::system_clock::now() + std::chrono::seconds(delta);
} }
@ -76,6 +84,7 @@ void State::builder(MachineReservation::ptr reservation)
State::StepResult State::doBuildStep(nix::ref<Store> destStore, State::StepResult State::doBuildStep(nix::ref<Store> destStore,
MachineReservation::ptr & reservation, MachineReservation::ptr & reservation,
Connection & conn,
std::shared_ptr<ActiveStep> activeStep) std::shared_ptr<ActiveStep> activeStep)
{ {
auto step(reservation->step); auto step(reservation->step);
@ -106,8 +115,6 @@ State::StepResult State::doBuildStep(nix::ref<Store> destStore,
buildOptions.maxLogSize = maxLogSize; buildOptions.maxLogSize = maxLogSize;
buildOptions.enforceDeterminism = step->isDeterministic; buildOptions.enforceDeterminism = step->isDeterministic;
auto conn(dbPool.get());
{ {
std::set<Build::ptr> dependents; std::set<Build::ptr> dependents;
std::set<Step::ptr> steps; std::set<Step::ptr> steps;
@ -132,7 +139,7 @@ State::StepResult State::doBuildStep(nix::ref<Store> destStore,
for (auto build2 : dependents) { for (auto build2 : dependents) {
if (build2->drvPath == step->drvPath) { if (build2->drvPath == step->drvPath) {
build = build2; build = build2;
pqxx::work txn(*conn); pqxx::work txn(conn);
notifyBuildStarted(txn, build->id); notifyBuildStarted(txn, build->id);
txn.commit(); txn.commit();
} }
@ -183,11 +190,11 @@ State::StepResult State::doBuildStep(nix::ref<Store> destStore,
} }
}); });
time_t stepStartTime = result.startTime = time(0); time_t stepStartTime = result.startTime = time(nullptr);
/* If any of the outputs have previously failed, then don't bother /* If any of the outputs have previously failed, then don't bother
building again. */ building again. */
if (checkCachedFailure(step, *conn)) if (checkCachedFailure(step, conn))
result.stepStatus = bsCachedFailure; result.stepStatus = bsCachedFailure;
else { else {
@ -195,13 +202,13 @@ State::StepResult State::doBuildStep(nix::ref<Store> destStore,
building. */ building. */
{ {
auto mc = startDbUpdate(); auto mc = startDbUpdate();
pqxx::work txn(*conn); pqxx::work txn(conn);
stepNr = createBuildStep(txn, result.startTime, buildId, step, machine->sshName, bsBusy); stepNr = createBuildStep(txn, result.startTime, buildId, step, machine->sshName, bsBusy);
txn.commit(); txn.commit();
} }
auto updateStep = [&](StepState stepState) { auto updateStep = [&](StepState stepState) {
pqxx::work txn(*conn); pqxx::work txn(conn);
updateBuildStep(txn, buildId, stepNr, stepState); updateBuildStep(txn, buildId, stepNr, stepState);
txn.commit(); txn.commit();
}; };
@ -230,7 +237,7 @@ State::StepResult State::doBuildStep(nix::ref<Store> destStore,
} }
} }
time_t stepStopTime = time(0); time_t stepStopTime = time(nullptr);
if (!result.stopTime) result.stopTime = stepStopTime; if (!result.stopTime) result.stopTime = stepStopTime;
/* For standard failures, we don't care about the error /* For standard failures, we don't care about the error
@ -244,7 +251,7 @@ State::StepResult State::doBuildStep(nix::ref<Store> destStore,
auto step_(step->state.lock()); auto step_(step->state.lock());
if (!step_->jobsets.empty()) { if (!step_->jobsets.empty()) {
// FIXME: loss of precision. // FIXME: loss of precision.
time_t charge = (result.stopTime - result.startTime) / step_->jobsets.size(); time_t charge = (result.stopTime - result.startTime) / static_cast<time_t>(step_->jobsets.size());
for (auto & jobset : step_->jobsets) for (auto & jobset : step_->jobsets)
jobset->addStep(result.startTime, charge); jobset->addStep(result.startTime, charge);
} }
@ -252,7 +259,7 @@ State::StepResult State::doBuildStep(nix::ref<Store> destStore,
/* Finish the step in the database. */ /* Finish the step in the database. */
if (stepNr) { if (stepNr) {
pqxx::work txn(*conn); pqxx::work txn(conn);
finishBuildStep(txn, result, buildId, stepNr, machine->sshName); finishBuildStep(txn, result, buildId, stepNr, machine->sshName);
txn.commit(); txn.commit();
} }
@ -328,7 +335,7 @@ State::StepResult State::doBuildStep(nix::ref<Store> destStore,
{ {
auto mc = startDbUpdate(); auto mc = startDbUpdate();
pqxx::work txn(*conn); pqxx::work txn(conn);
for (auto & b : direct) { for (auto & b : direct) {
printInfo("marking build %1% as succeeded", b->id); printInfo("marking build %1% as succeeded", b->id);
@ -356,7 +363,7 @@ State::StepResult State::doBuildStep(nix::ref<Store> destStore,
/* Send notification about the builds that have this step as /* Send notification about the builds that have this step as
the top-level. */ the top-level. */
{ {
pqxx::work txn(*conn); pqxx::work txn(conn);
for (auto id : buildIDs) for (auto id : buildIDs)
notifyBuildFinished(txn, id, {}); notifyBuildFinished(txn, id, {});
txn.commit(); txn.commit();
@ -385,7 +392,7 @@ State::StepResult State::doBuildStep(nix::ref<Store> destStore,
} }
} else } else
failStep(*conn, step, buildId, result, machine, stepFinished); failStep(conn, step, buildId, result, machine, stepFinished);
// FIXME: keep stats about aborted steps? // FIXME: keep stats about aborted steps?
nrStepsDone++; nrStepsDone++;

View file

@ -46,7 +46,7 @@ void State::dispatcher()
auto t_after_work = std::chrono::steady_clock::now(); auto t_after_work = std::chrono::steady_clock::now();
prom.dispatcher_time_spent_running.Increment( prom.dispatcher_time_spent_running.Increment(
std::chrono::duration_cast<std::chrono::microseconds>(t_after_work - t_before_work).count()); static_cast<double>(std::chrono::duration_cast<std::chrono::microseconds>(t_after_work - t_before_work).count()));
dispatchTimeMs += std::chrono::duration_cast<std::chrono::milliseconds>(t_after_work - t_before_work).count(); dispatchTimeMs += std::chrono::duration_cast<std::chrono::milliseconds>(t_after_work - t_before_work).count();
/* Sleep until we're woken up (either because a runnable build /* Sleep until we're woken up (either because a runnable build
@ -63,7 +63,7 @@ void State::dispatcher()
auto t_after_sleep = std::chrono::steady_clock::now(); auto t_after_sleep = std::chrono::steady_clock::now();
prom.dispatcher_time_spent_waiting.Increment( prom.dispatcher_time_spent_waiting.Increment(
std::chrono::duration_cast<std::chrono::microseconds>(t_after_sleep - t_after_work).count()); static_cast<double>(std::chrono::duration_cast<std::chrono::microseconds>(t_after_sleep - t_after_work).count()));
} catch (std::exception & e) { } catch (std::exception & e) {
printError("dispatcher: %s", e.what()); printError("dispatcher: %s", e.what());
@ -190,7 +190,7 @@ system_time State::doDispatch()
} }
} }
sort(runnableSorted.begin(), runnableSorted.end(), std::ranges::sort(runnableSorted,
[](const StepInfo & a, const StepInfo & b) [](const StepInfo & a, const StepInfo & b)
{ {
return return
@ -240,11 +240,11 @@ system_time State::doDispatch()
- Then by speed factor. - Then by speed factor.
- Finally by load. */ - Finally by load. */
sort(machinesSorted.begin(), machinesSorted.end(), std::ranges::sort(machinesSorted,
[](const MachineInfo & a, const MachineInfo & b) -> bool [](const MachineInfo & a, const MachineInfo & b) -> bool
{ {
float ta = std::round(a.currentJobs / a.machine->speedFactorFloat); float ta = std::round(static_cast<float>(a.currentJobs) / a.machine->speedFactorFloat);
float tb = std::round(b.currentJobs / b.machine->speedFactorFloat); float tb = std::round(static_cast<float>(b.currentJobs) / b.machine->speedFactorFloat);
return return
ta != tb ? ta < tb : ta != tb ? ta < tb :
a.machine->speedFactorFloat != b.machine->speedFactorFloat ? a.machine->speedFactorFloat > b.machine->speedFactorFloat : a.machine->speedFactorFloat != b.machine->speedFactorFloat ? a.machine->speedFactorFloat > b.machine->speedFactorFloat :
@ -345,7 +345,7 @@ void State::abortUnsupported()
auto machines2 = *machines.lock(); auto machines2 = *machines.lock();
system_time now = std::chrono::system_clock::now(); system_time now = std::chrono::system_clock::now();
auto now2 = time(0); auto now2 = time(nullptr);
std::unordered_set<Step::ptr> aborted; std::unordered_set<Step::ptr> aborted;
@ -436,7 +436,7 @@ void Jobset::addStep(time_t startTime, time_t duration)
void Jobset::pruneSteps() void Jobset::pruneSteps()
{ {
time_t now = time(0); time_t now = time(nullptr);
auto steps_(steps.lock()); auto steps_(steps.lock());
while (!steps_->empty()) { while (!steps_->empty()) {
auto i = steps_->begin(); auto i = steps_->begin();
@ -464,7 +464,7 @@ State::MachineReservation::~MachineReservation()
auto prev = machine->state->currentJobs--; auto prev = machine->state->currentJobs--;
assert(prev); assert(prev);
if (prev == 1) if (prev == 1)
machine->state->idleSince = time(0); machine->state->idleSince = time(nullptr);
{ {
auto machineTypes_(state.machineTypes.lock()); auto machineTypes_(state.machineTypes.lock());

View file

@ -14,7 +14,7 @@ struct BuildProduct
bool isRegular = false; bool isRegular = false;
std::optional<nix::Hash> sha256hash; std::optional<nix::Hash> sha256hash;
std::optional<off_t> fileSize; std::optional<off_t> fileSize;
BuildProduct() { } BuildProduct() = default;
}; };
struct BuildMetric struct BuildMetric

View file

@ -105,7 +105,7 @@ State::State(std::optional<std::string> metricsAddrOpt)
: config(std::make_unique<HydraConfig>()) : config(std::make_unique<HydraConfig>())
, maxUnsupportedTime(config->getIntOption("max_unsupported_time", 0)) , maxUnsupportedTime(config->getIntOption("max_unsupported_time", 0))
, dbPool(config->getIntOption("max_db_connections", 128)) , dbPool(config->getIntOption("max_db_connections", 128))
, localWorkThrottler(config->getIntOption("max_local_worker_threads", std::min(maxSupportedLocalWorkers, std::max(4u, std::thread::hardware_concurrency()) - 2))) , localWorkThrottler(static_cast<ptrdiff_t>(config->getIntOption("max_local_worker_threads", std::min(maxSupportedLocalWorkers, std::max(4u, std::thread::hardware_concurrency()) - 2))))
, maxOutputSize(config->getIntOption("max_output_size", 2ULL << 30)) , maxOutputSize(config->getIntOption("max_output_size", 2ULL << 30))
, maxLogSize(config->getIntOption("max_log_size", 64ULL << 20)) , maxLogSize(config->getIntOption("max_log_size", 64ULL << 20))
, uploadLogsToBinaryCache(config->getBoolOption("upload_logs_to_binary_cache", false)) , uploadLogsToBinaryCache(config->getBoolOption("upload_logs_to_binary_cache", false))
@ -138,7 +138,7 @@ nix::MaintainCount<counter> State::startDbUpdate()
{ {
if (nrActiveDbUpdates > 6) if (nrActiveDbUpdates > 6)
printError("warning: %d concurrent database updates; PostgreSQL may be stalled", nrActiveDbUpdates.load()); printError("warning: %d concurrent database updates; PostgreSQL may be stalled", nrActiveDbUpdates.load());
return MaintainCount<counter>(nrActiveDbUpdates); return {nrActiveDbUpdates};
} }
@ -171,9 +171,9 @@ void State::parseMachines(const std::string & contents)
for (auto & f : mandatoryFeatures) for (auto & f : mandatoryFeatures)
supportedFeatures.insert(f); supportedFeatures.insert(f);
using MaxJobs = std::remove_const<decltype(nix::Machine::maxJobs)>::type; using MaxJobs = std::remove_const_t<decltype(nix::Machine::maxJobs)>;
auto machine = std::make_shared<::Machine>(nix::Machine { auto machine = std::make_shared<::Machine>(::Machine {{
// `storeUri`, not yet used // `storeUri`, not yet used
"", "",
// `systemTypes`, not yet used // `systemTypes`, not yet used
@ -194,11 +194,11 @@ void State::parseMachines(const std::string & contents)
tokens[7] != "" && tokens[7] != "-" tokens[7] != "" && tokens[7] != "-"
? base64Decode(tokens[7]) ? base64Decode(tokens[7])
: "", : "",
}); }});
machine->sshName = tokens[0]; machine->sshName = tokens[0];
             machine->systemTypesSet = tokenizeString<StringSet>(tokens[1], ",");
-            machine->speedFactorFloat = atof(tokens[4].c_str());
+            machine->speedFactorFloat = static_cast<float>(atof(tokens[4].c_str()));

             /* Re-use the State object of the previous machine with the
                same name. */
@@ -412,7 +412,7 @@ void State::finishBuildStep(pqxx::work & txn, const RemoteResult & result,
 }

-int State::createSubstitutionStep(pqxx::work & txn, time_t startTime, time_t stopTime,
+unsigned int State::createSubstitutionStep(pqxx::work & txn, time_t startTime, time_t stopTime,
     Build::ptr build, const StorePath & drvPath, const nix::Derivation drv, const std::string & outputName, const StorePath & storePath)
 {
 restart:
@@ -594,7 +594,7 @@ std::shared_ptr<PathLocks> State::acquireGlobalLock()
     createDirs(dirOf(lockPath));

     auto lock = std::make_shared<PathLocks>();
-    if (!lock->lockPaths(PathSet({lockPath}), "", false)) return 0;
+    if (!lock->lockPaths(PathSet({lockPath}), "", false)) return nullptr;

     return lock;
 }
@@ -602,10 +602,10 @@ std::shared_ptr<PathLocks> State::acquireGlobalLock()

 void State::dumpStatus(Connection & conn)
 {
-    time_t now = time(0);
+    time_t now = time(nullptr);
     json statusJson = {
         {"status", "up"},
-        {"time", time(0)},
+        {"time", time(nullptr)},
         {"uptime", now - startedAt},
         {"pid", getpid()},
@@ -620,7 +620,7 @@ void State::dumpStatus(Connection & conn)
         {"bytesReceived", bytesReceived.load()},
         {"nrBuildsRead", nrBuildsRead.load()},
         {"buildReadTimeMs", buildReadTimeMs.load()},
-        {"buildReadTimeAvgMs", nrBuildsRead == 0 ? 0.0 : (float) buildReadTimeMs / nrBuildsRead},
+        {"buildReadTimeAvgMs", nrBuildsRead == 0 ? 0.0 : (float) buildReadTimeMs / (float) nrBuildsRead},
         {"nrBuildsDone", nrBuildsDone.load()},
         {"nrStepsStarted", nrStepsStarted.load()},
         {"nrStepsDone", nrStepsDone.load()},
@@ -629,7 +629,7 @@ void State::dumpStatus(Connection & conn)
         {"nrQueueWakeups", nrQueueWakeups.load()},
         {"nrDispatcherWakeups", nrDispatcherWakeups.load()},
         {"dispatchTimeMs", dispatchTimeMs.load()},
-        {"dispatchTimeAvgMs", nrDispatcherWakeups == 0 ? 0.0 : (float) dispatchTimeMs / nrDispatcherWakeups},
+        {"dispatchTimeAvgMs", nrDispatcherWakeups == 0 ? 0.0 : (float) dispatchTimeMs / (float) nrDispatcherWakeups},
         {"nrDbConnections", dbPool.count()},
         {"nrActiveDbUpdates", nrActiveDbUpdates.load()},
     };
@@ -649,8 +649,8 @@ void State::dumpStatus(Connection & conn)
     if (nrStepsDone) {
         statusJson["totalStepTime"] = totalStepTime.load();
         statusJson["totalStepBuildTime"] = totalStepBuildTime.load();
-        statusJson["avgStepTime"] = (float) totalStepTime / nrStepsDone;
-        statusJson["avgStepBuildTime"] = (float) totalStepBuildTime / nrStepsDone;
+        statusJson["avgStepTime"] = (float) totalStepTime / (float) nrStepsDone;
+        statusJson["avgStepBuildTime"] = (float) totalStepBuildTime / (float) nrStepsDone;
     }

     {
@@ -677,8 +677,8 @@ void State::dumpStatus(Connection & conn)
             if (m->state->nrStepsDone) {
                 machine["totalStepTime"] = s->totalStepTime.load();
                 machine["totalStepBuildTime"] = s->totalStepBuildTime.load();
-                machine["avgStepTime"] = (float) s->totalStepTime / s->nrStepsDone;
-                machine["avgStepBuildTime"] = (float) s->totalStepBuildTime / s->nrStepsDone;
+                machine["avgStepTime"] = (float) s->totalStepTime / (float) s->nrStepsDone;
+                machine["avgStepBuildTime"] = (float) s->totalStepBuildTime / (float) s->nrStepsDone;
             }
             statusJson["machines"][m->sshName] = machine;
         }
@@ -706,7 +706,7 @@ void State::dumpStatus(Connection & conn)
             };
             if (i.second.runnable > 0)
                 machineTypeJson["waitTime"] = i.second.waitTime.count() +
-                    i.second.runnable * (time(0) - lastDispatcherCheck);
+                    i.second.runnable * (time(nullptr) - lastDispatcherCheck);
             if (i.second.running == 0)
                 machineTypeJson["lastActive"] = std::chrono::system_clock::to_time_t(i.second.lastActive);
         }
@@ -732,11 +732,11 @@ void State::dumpStatus(Connection & conn)
             {"narWriteCompressionTimeMs", stats.narWriteCompressionTimeMs.load()},
             {"narCompressionSavings",
                 stats.narWriteBytes
-                ? 1.0 - (double) stats.narWriteCompressedBytes / stats.narWriteBytes
+                ? 1.0 - (double) stats.narWriteCompressedBytes / (double) stats.narWriteBytes
                 : 0.0},
             {"narCompressionSpeed", // MiB/s
                 stats.narWriteCompressionTimeMs
-                ? (double) stats.narWriteBytes / stats.narWriteCompressionTimeMs * 1000.0 / (1024.0 * 1024.0)
+                ? (double) stats.narWriteBytes / (double) stats.narWriteCompressionTimeMs * 1000.0 / (1024.0 * 1024.0)
                 : 0.0},
         };
@@ -749,20 +749,20 @@ void State::dumpStatus(Connection & conn)
             {"putTimeMs", s3Stats.putTimeMs.load()},
             {"putSpeed",
                 s3Stats.putTimeMs
-                ? (double) s3Stats.putBytes / s3Stats.putTimeMs * 1000.0 / (1024.0 * 1024.0)
+                ? (double) s3Stats.putBytes / (double) s3Stats.putTimeMs * 1000.0 / (1024.0 * 1024.0)
                 : 0.0},
             {"get", s3Stats.get.load()},
             {"getBytes", s3Stats.getBytes.load()},
             {"getTimeMs", s3Stats.getTimeMs.load()},
             {"getSpeed",
                 s3Stats.getTimeMs
-                ? (double) s3Stats.getBytes / s3Stats.getTimeMs * 1000.0 / (1024.0 * 1024.0)
+                ? (double) s3Stats.getBytes / (double) s3Stats.getTimeMs * 1000.0 / (1024.0 * 1024.0)
                 : 0.0},
             {"head", s3Stats.head.load()},
             {"costDollarApprox",
-                (s3Stats.get + s3Stats.head) / 10000.0 * 0.004
-                + s3Stats.put / 1000.0 * 0.005 +
-                + s3Stats.getBytes / (1024.0 * 1024.0 * 1024.0) * 0.09},
+                (double) (s3Stats.get + s3Stats.head) / 10000.0 * 0.004
+                + (double) s3Stats.put / 1000.0 * 0.005 +
+                + (double) s3Stats.getBytes / (1024.0 * 1024.0 * 1024.0) * 0.09},
         };
     }
 }
@@ -848,7 +848,7 @@ void State::run(BuildID buildOne)
     /* Can't be bothered to shut down cleanly. Goodbye! */
     auto callback = createInterruptCallback([&]() { std::_Exit(0); });

-    startedAt = time(0);
+    startedAt = time(nullptr);
     this->buildOne = buildOne;

     auto lock = acquireGlobalLock();


@@ -3,6 +3,7 @@
 #include "archive.hh"

 #include <unordered_set>
+#include <utility>

 using namespace nix;

@@ -18,8 +19,8 @@ struct Extractor : ParseSink
     NarMemberData * curMember = nullptr;
     Path prefix;

-    Extractor(NarMemberDatas & members, const Path & prefix)
-        : members(members), prefix(prefix)
+    Extractor(NarMemberDatas & members, Path prefix)
+        : members(members), prefix(std::move(prefix))
     { }

     void createDirectory(const Path & path) override


@@ -13,7 +13,7 @@ struct NarMemberData
     std::optional<nix::Hash> sha256;
 };

-typedef std::map<nix::Path, NarMemberData> NarMemberDatas;
+using NarMemberDatas = std::map<nix::Path, NarMemberData>;

 /* Read a NAR from a source and get to some info about every file
    inside the NAR. */


@@ -4,7 +4,8 @@
 #include "thread-pool.hh"

 #include <cstring>
-#include <signal.h>
+#include <utility>
+#include <csignal>

 using namespace nix;

@@ -52,7 +53,7 @@ void State::queueMonitorLoop(Connection & conn)
         auto t_after_work = std::chrono::steady_clock::now();

         prom.queue_monitor_time_spent_running.Increment(
-            std::chrono::duration_cast<std::chrono::microseconds>(t_after_work - t_before_work).count());
+            static_cast<double>(std::chrono::duration_cast<std::chrono::microseconds>(t_after_work - t_before_work).count()));

         /* Sleep until we get notification from the database about an
            event. */
@@ -79,7 +80,7 @@ void State::queueMonitorLoop(Connection & conn)
         auto t_after_sleep = std::chrono::steady_clock::now();
         prom.queue_monitor_time_spent_waiting.Increment(
-            std::chrono::duration_cast<std::chrono::microseconds>(t_after_sleep - t_after_work).count());
+            static_cast<double>(std::chrono::duration_cast<std::chrono::microseconds>(t_after_sleep - t_after_work).count()));
     }

     exit(0);
@@ -88,7 +89,7 @@ void State::queueMonitorLoop(Connection & conn)

 struct PreviousFailure : public std::exception {
     Step::ptr step;
-    PreviousFailure(Step::ptr step) : step(step) { }
+    PreviousFailure(Step::ptr step) : step(std::move(step)) { }
 };

@@ -117,7 +118,7 @@ bool State::getQueuedBuilds(Connection & conn,
     for (auto const & row : res) {
         auto builds_(builds.lock());
-        BuildID id = row["id"].as<BuildID>();
+        auto id = row["id"].as<BuildID>();
         if (buildOne && id != buildOne) continue;
         if (builds_->count(id)) continue;
@@ -137,7 +138,7 @@ bool State::getQueuedBuilds(Connection & conn,

             newIDs.push_back(id);
             newBuildsByID[id] = build;
-            newBuildsByPath.emplace(std::make_pair(build->drvPath, id));
+            newBuildsByPath.emplace(build->drvPath, id);
         }
     }
@@ -162,7 +163,7 @@ bool State::getQueuedBuilds(Connection & conn,
                     ("update Builds set finished = 1, buildStatus = $2, startTime = $3, stopTime = $3 where id = $1 and finished = 0",
                     build->id,
                     (int) bsAborted,
-                    time(0));
+                    time(nullptr));
                 txn.commit();
                 build->finishedInDB = true;
                 nrBuildsDone++;
@@ -176,7 +177,7 @@ bool State::getQueuedBuilds(Connection & conn,
         /* Create steps for this derivation and its dependencies. */
         try {
             step = createStep(destStore, conn, build, build->drvPath,
-                build, 0, finishedDrvs, newSteps, newRunnable);
+                build, nullptr, finishedDrvs, newSteps, newRunnable);
         } catch (PreviousFailure & ex) {

             /* Some step previously failed, so mark the build as
@@ -221,7 +222,7 @@ bool State::getQueuedBuilds(Connection & conn,
                     "where id = $1 and finished = 0",
                     build->id,
                     (int) (ex.step->drvPath == build->drvPath ? bsFailed : bsDepFailed),
-                    time(0));
+                    time(nullptr));
                 notifyBuildFinished(txn, build->id, {});
                 txn.commit();
                 build->finishedInDB = true;
@@ -254,7 +255,7 @@ bool State::getQueuedBuilds(Connection & conn,
             {
                 auto mc = startDbUpdate();
                 pqxx::work txn(conn);
-                time_t now = time(0);
+                time_t now = time(nullptr);
                 if (!buildOneDone && build->id == buildOne) buildOneDone = true;
                 printMsg(lvlInfo, "marking build %1% as succeeded (cached)", build->id);
                 markSucceededBuild(txn, build, res, true, now, now);
@@ -355,7 +356,7 @@ void State::processQueueChange(Connection & conn)
         pqxx::work txn(conn);
         auto res = txn.exec("select id, globalPriority from Builds where finished = 0");
         for (auto const & row : res)
-            currentIds[row["id"].as<BuildID>()] = row["globalPriority"].as<BuildID>();
+            currentIds[row["id"].as<BuildID>()] = row["globalPriority"].as<int>();
     }

     {
@@ -438,7 +439,7 @@ Step::ptr State::createStep(ref<Store> destStore,
     Build::ptr referringBuild, Step::ptr referringStep, std::set<StorePath> & finishedDrvs,
     std::set<Step::ptr> & newSteps, std::set<Step::ptr> & newRunnable)
 {
-    if (finishedDrvs.find(drvPath) != finishedDrvs.end()) return 0;
+    if (finishedDrvs.find(drvPath) != finishedDrvs.end()) return nullptr;

     /* Check if the requested step already exists. If not, create a
        new step. In any case, make the step reachable from
@@ -516,7 +517,7 @@ Step::ptr State::createStep(ref<Store> destStore,
         std::map<DrvOutput, std::optional<StorePath>> paths;
         for (auto & [outputName, maybeOutputPath] : destStore->queryPartialDerivationOutputMap(drvPath, &*localStore)) {
             auto outputHash = outputHashes.at(outputName);
-            paths.insert({{outputHash, outputName}, maybeOutputPath});
+            paths.insert({{.drvHash=outputHash, .outputName=outputName}, maybeOutputPath});
         }

         auto missing = getMissingRemotePaths(destStore, paths);
@@ -560,7 +561,7 @@ Step::ptr State::createStep(ref<Store> destStore,
             auto & path = *pathOpt;

             try {
-                time_t startTime = time(0);
+                time_t startTime = time(nullptr);

                 if (localStore->isValidPath(path))
                     printInfo("copying output %1% of %2% from local store",
@@ -578,7 +579,7 @@ Step::ptr State::createStep(ref<Store> destStore,
                         StorePathSet { path },
                         NoRepair, CheckSigs, NoSubstitute);

-                time_t stopTime = time(0);
+                time_t stopTime = time(nullptr);

                 {
                     auto mc = startDbUpdate();
@@ -602,7 +603,7 @@ Step::ptr State::createStep(ref<Store> destStore,
     // FIXME: check whether all outputs are in the binary cache.
     if (valid) {
         finishedDrvs.insert(drvPath);
-        return 0;
+        return nullptr;
     }

     /* No, we need to build. */
@@ -610,7 +611,7 @@ Step::ptr State::createStep(ref<Store> destStore,

     /* Create steps for the dependencies. */
     for (auto & i : step->drv->inputDrvs.map) {
-        auto dep = createStep(destStore, conn, build, i.first, 0, step, finishedDrvs, newSteps, newRunnable);
+        auto dep = createStep(destStore, conn, build, i.first, nullptr, step, finishedDrvs, newSteps, newRunnable);
         if (dep) {
             auto step_(step->state.lock());
             step_->deps.insert(dep);
@@ -658,11 +659,11 @@ Jobset::ptr State::createJobset(pqxx::work & txn,
     auto res2 = txn.exec_params
         ("select s.startTime, s.stopTime from BuildSteps s join Builds b on build = id "
         "where s.startTime is not null and s.stopTime > $1 and jobset_id = $2",
-        time(0) - Jobset::schedulingWindow * 10,
+        time(nullptr) - Jobset::schedulingWindow * 10,
         jobsetID);
     for (auto const & row : res2) {
-        time_t startTime = row["startTime"].as<time_t>();
-        time_t stopTime = row["stopTime"].as<time_t>();
+        auto startTime = row["startTime"].as<time_t>();
+        auto stopTime = row["stopTime"].as<time_t>();
         jobset->addStep(startTime, stopTime - startTime);
     }
@@ -702,7 +703,7 @@ BuildOutput State::getBuildOutputCached(Connection & conn, nix::ref<nix::Store>
             "where finished = 1 and (buildStatus = 0 or buildStatus = 6) and path = $1",
             localStore->printStorePath(output));
         if (r.empty()) continue;
-        BuildID id = r[0][0].as<BuildID>();
+        auto id = r[0][0].as<BuildID>();

         printInfo("reusing build %d", id);


@@ -8,6 +8,7 @@
 #include <queue>
 #include <regex>
 #include <semaphore>
+#include <utility>

 #include <prometheus/counter.h>
 #include <prometheus/gauge.h>
@@ -26,16 +27,16 @@
 #include "machines.hh"

-typedef unsigned int BuildID;
+using BuildID = unsigned int;

-typedef unsigned int JobsetID;
+using JobsetID = unsigned int;

-typedef std::chrono::time_point<std::chrono::system_clock> system_time;
+using system_time = std::chrono::time_point<std::chrono::system_clock>;

-typedef std::atomic<unsigned long> counter;
+using counter = std::atomic<unsigned long>;

-typedef enum {
+enum BuildStatus {
     bsSuccess = 0,
     bsFailed = 1,
     bsDepFailed = 2, // builds only
@@ -49,10 +50,10 @@ typedef enum {
     bsNarSizeLimitExceeded = 11,
     bsNotDeterministic = 12,
     bsBusy = 100, // not stored
-} BuildStatus;
+};

-typedef enum {
+enum StepState {
     ssPreparing = 1,
     ssConnecting = 10,
     ssSendingInputs = 20,
@@ -60,7 +61,7 @@ typedef enum {
     ssWaitingForLocalSlot = 35,
     ssReceivingOutputs = 40,
     ssPostProcessing = 50,
-} StepState;
+};

 struct RemoteResult
@@ -78,7 +79,7 @@ struct RemoteResult
     unsigned int overhead = 0;
     nix::Path logFile;

-    BuildStatus buildStatus() const
+    [[nodiscard]] BuildStatus buildStatus() const
     {
         return stepStatus == bsCachedFailure ? bsFailed : stepStatus;
     }
@@ -95,10 +96,10 @@ class Jobset
 {
 public:

-    typedef std::shared_ptr<Jobset> ptr;
-    typedef std::weak_ptr<Jobset> wptr;
+    using ptr = std::shared_ptr<Jobset>;
+    using wptr = std::weak_ptr<Jobset>;

-    static const time_t schedulingWindow = 24 * 60 * 60;
+    static const time_t schedulingWindow = static_cast<time_t>(24 * 60 * 60);

 private:
@@ -115,7 +116,7 @@ public:
         return (double) seconds / shares;
     }

-    void setShares(int shares_)
+    void setShares(unsigned int shares_)
     {
         assert(shares_ > 0);
         shares = shares_;
@@ -131,8 +132,8 @@ public:

 struct Build
 {
-    typedef std::shared_ptr<Build> ptr;
-    typedef std::weak_ptr<Build> wptr;
+    using ptr = std::shared_ptr<Build>;
+    using wptr = std::weak_ptr<Build>;

     BuildID id;
     nix::StorePath drvPath;
@@ -163,8 +164,8 @@ struct Build

 struct Step
 {
-    typedef std::shared_ptr<Step> ptr;
-    typedef std::weak_ptr<Step> wptr;
+    using ptr = std::shared_ptr<Step>;
+    using wptr = std::weak_ptr<Step>;

     nix::StorePath drvPath;
     std::unique_ptr<nix::Derivation> drv;
@@ -221,13 +222,8 @@ struct Step

     nix::Sync<State> state;

-    Step(const nix::StorePath & drvPath) : drvPath(drvPath)
+    Step(nix::StorePath drvPath) : drvPath(std::move(drvPath))
     { }
-
-    ~Step()
-    {
-        //printMsg(lvlError, format("destroying step %1%") % drvPath);
-    }
 };

@@ -239,7 +235,7 @@ void visitDependencies(std::function<void(Step::ptr)> visitor, Step::ptr step);

 struct Machine : nix::Machine
 {
-    typedef std::shared_ptr<Machine> ptr;
+    using ptr = std::shared_ptr<Machine>;

     /* TODO Get rid of: `nix::Machine::storeUri` is normalized in a way
        we are not yet used to, but once we are, we don't need this. */
@@ -254,7 +250,7 @@ struct Machine : nix::Machine
     float speedFactorFloat = 1.0;

     struct State {
-        typedef std::shared_ptr<State> ptr;
+        using ptr = std::shared_ptr<State>;
         counter currentJobs{0};
         counter nrStepsDone{0};
         counter totalStepTime{0}; // total time for steps, including closure copying
@@ -358,22 +354,22 @@ private:
     bool useSubstitutes = false;

     /* The queued builds. */
-    typedef std::map<BuildID, Build::ptr> Builds;
+    using Builds = std::map<BuildID, Build::ptr>;
     nix::Sync<Builds> builds;

     /* The jobsets. */
-    typedef std::map<std::pair<std::string, std::string>, Jobset::ptr> Jobsets;
+    using Jobsets = std::map<std::pair<std::string, std::string>, Jobset::ptr>;
     nix::Sync<Jobsets> jobsets;

     /* All active or pending build steps (i.e. dependencies of the
        queued builds). Note that these are weak pointers. Steps are
        kept alive by being reachable from Builds or by being in
        progress. */
-    typedef std::map<nix::StorePath, Step::wptr> Steps;
+    using Steps = std::map<nix::StorePath, Step::wptr>;
     nix::Sync<Steps> steps;

     /* Build steps that have no unbuilt dependencies. */
-    typedef std::list<Step::wptr> Runnable;
+    using Runnable = std::list<Step::wptr>;
     nix::Sync<Runnable> runnable;

     /* CV for waking up the dispatcher. */
@@ -385,7 +381,7 @@ private:

     /* The build machines. */
     std::mutex machinesReadyLock;
-    typedef std::map<std::string, Machine::ptr> Machines;
+    using Machines = std::map<std::string, Machine::ptr>;
     nix::Sync<Machines> machines; // FIXME: use atomic_shared_ptr

     /* Throttler for CPU-bound local work. */
@@ -431,7 +427,7 @@ private:

     struct MachineReservation
     {
-        typedef std::shared_ptr<MachineReservation> ptr;
+        using ptr = std::shared_ptr<MachineReservation>;
         State & state;
         Step::ptr step;
         Machine::ptr machine;
@@ -534,7 +530,7 @@ private:
     void finishBuildStep(pqxx::work & txn, const RemoteResult & result, BuildID buildId, unsigned int stepNr,
         const std::string & machine);

-    int createSubstitutionStep(pqxx::work & txn, time_t startTime, time_t stopTime,
+    unsigned int createSubstitutionStep(pqxx::work & txn, time_t startTime, time_t stopTime,
         Build::ptr build, const nix::StorePath & drvPath, const nix::Derivation drv, const std::string & outputName, const nix::StorePath & storePath);

     void updateBuild(pqxx::work & txn, Build::ptr build, BuildStatus status);
@@ -594,6 +590,7 @@ private:
     enum StepResult { sDone, sRetry, sMaybeCancelled };
     StepResult doBuildStep(nix::ref<nix::Store> destStore,
         MachineReservation::ptr & reservation,
+        Connection & conn,
         std::shared_ptr<ActiveStep> activeStep);

     void buildRemote(nix::ref<nix::Store> destStore,
@@ -622,8 +619,6 @@ private:

     void addRoot(const nix::StorePath & storePath);

-    void runMetricsExporter();
-
 public:
     void showStatus();


@@ -23,23 +23,37 @@ sub all : Chained('get_builds') PathPart {
     $c->stash->{total} = $c->stash->{allBuilds}->search({finished => 1})->count
         unless defined $c->stash->{total};

-    $c->stash->{builds} = [ $c->stash->{allBuilds}->search(
-        { finished => 1 },
-        { order_by => "stoptime DESC"
+    my $extra = {
+        order_by => "stoptime DESC"
         , columns => [@buildListColumns]
         , rows => $resultsPerPage
-        , page => $page }) ];
+        , page => $page };
+
+    my $criteria = { finished => 1 };
+    unless ($c->user_exists) {
+        $extra->{join} = {"jobset" => "project"};
+        $criteria->{"project.private"} = 0;
+    }
+
+    $c->stash->{builds} = [ $c->stash->{allBuilds}->search(
+        $criteria,
+        $extra
+    ) ];
 }

 sub nix : Chained('get_builds') PathPart('channel/latest') CaptureArgs(0) {
     my ($self, $c) = @_;
+    my $private = $c->user_exists ? [1,0] : [0];
     $c->stash->{channelName} = $c->stash->{channelBaseName} . "-latest";
     $c->stash->{channelBuilds} = $c->stash->{latestSucceeded}
         ->search_literal("exists (select 1 from buildproducts where build = me.id and type = 'nix-build')")
-        ->search({}, { columns => [@buildListColumns, 'drvpath', 'description', 'homepage']
-            , join => ["buildoutputs"]
+        ->search({"project.private" => {-in => $private}},
+            { columns => [@buildListColumns, 'drvpath', 'description', 'homepage']
+            , join => ["buildoutputs", {"jobset" => "project"}]
             , order_by => ["me.id", "buildoutputs.name"]
             , '+select' => ['buildoutputs.path', 'buildoutputs.name'], '+as' => ['outpath', 'outname'] });
 }


@@ -49,6 +49,8 @@ sub latestbuilds : Chained('api') PathPart('latestbuilds') Args(0) {
     error($c, "Parameter not defined!") if !defined $nr;

     my $project = $c->request->params->{project};
+    checkProjectVisibleForGuest($c, $c->stash->{project});
+
     my $jobset = $c->request->params->{jobset};
     my $job = $c->request->params->{job};
     my $system = $c->request->params->{system};
@@ -106,6 +108,8 @@ sub jobsets : Chained('api') PathPart('jobsets') Args(0) {
     my $project = $c->model('DB::Projects')->find($projectName)
         or notFound($c, "Project $projectName doesn't exist.");

+    checkProjectVisibleForGuest($c, $project);
+
     my @jobsets = jobsetOverview($c, $project);

     my @list;
@@ -124,7 +128,17 @@ sub queue : Chained('api') PathPart('queue') Args(0) {
     my $nr = $c->request->params->{nr};
     error($c, "Parameter not defined!") if !defined $nr;

-    my @builds = $c->model('DB::Builds')->search({finished => 0}, {rows => $nr, order_by => ["priority DESC", "id"]});
+    my $criteria = {finished => 0};
+    my $extra = {
+        rows => $nr,
+        order_by => ["priority DESC", "id"]
+    };
+    unless ($c->user_exists) {
+        $criteria->{"project.private"} = 0;
+        $extra->{join} = {"jobset" => "project"};
+    }
+
+    my @builds = $c->model('DB::Builds')->search($criteria, $extra);

     my @list;
     push @list, buildToHash($_) foreach @builds;
@@ -198,6 +212,16 @@ sub scmdiff : Path('/api/scmdiff') Args(0) {
     my $rev1 = $c->request->params->{rev1};
     my $rev2 = $c->request->params->{rev2};

+    unless ($c->user_exists) {
+        my $search = $c->model('DB::JobsetEvalInputs')->search(
+            { "project.private" => 0, "me.uri" => $uri },
+            { join => { "eval" => { jobset => "project" } } }
+        );
+        if ($search == 0) {
+            die("invalid revisions: [$rev1] [$rev2]")
+        }
+    }
+
     die("invalid revisions: [$rev1] [$rev2]") if $rev1 !~ m/^[a-zA-Z0-9_.]+$/ || $rev2 !~ m/^[a-zA-Z0-9_.]+$/;

     # FIXME: injection danger.
@@ -242,23 +266,35 @@ sub push : Chained('api') PathPart('push') Args(0) {
     $c->{stash}->{json}->{jobsetsTriggered} = [];

     my $force = exists $c->request->query_params->{force};
-    my @jobsets = split /,/, ($c->request->query_params->{jobsets} // "");
-    foreach my $s (@jobsets) {
+    my @jobsetNames = split /,/, ($c->request->query_params->{jobsets} // "");
+    my @jobsets;
+
+    foreach my $s (@jobsetNames) {
         my ($p, $j) = parseJobsetName($s);
         my $jobset = $c->model('DB::Jobsets')->find($p, $j);
-        next unless defined $jobset && ($force || ($jobset->project->enabled && $jobset->enabled));
-        triggerJobset($self, $c, $jobset, $force);
+        push @jobsets, $jobset if defined $jobset;
     }

     my @repos = split /,/, ($c->request->query_params->{repos} // "");
     foreach my $r (@repos) {
-        triggerJobset($self, $c, $_, $force) foreach $c->model('DB::Jobsets')->search(
+        foreach ($c->model('DB::Jobsets')->search(
             { 'project.enabled' => 1, 'me.enabled' => 1 },
             {
                 join => 'project',
                 where => \ [ 'exists (select 1 from JobsetInputAlts where project = me.project and jobset = me.name and value = ?)', [ 'value', $r ] ],
                 order_by => 'me.id DESC'
-            });
+            })) {
+            push @jobsets, $_;
+        }
+    }
+
+    foreach my $jobset (@jobsets) {
+        requireRestartPrivileges($c, $jobset->project);
+    }
+
+    foreach my $jobset (@jobsets) {
+        next unless defined $jobset && ($force || ($jobset->project->enabled && $jobset->enabled));
+        triggerJobset($self, $c, $jobset, $force);
     }

     $self->status_ok(
@@ -273,7 +309,7 @@ sub push_github : Chained('api') PathPart('push-github') Args(0) {
     $c->{stash}->{json}->{jobsetsTriggered} = [];

     my $in = $c->request->{data};
-    my $owner = $in->{repository}->{owner}->{name} or die;
+    my $owner = $in->{repository}->{owner}->{login} or die;
     my $repo = $in->{repository}->{name} or die;
     print STDERR "got push from GitHub repository $owner/$repo\n";

@@ -285,6 +321,23 @@ sub push_github : Chained('api') PathPart('push-github') Args(0) {
     $c->response->body("");
 }

+sub push_gitea : Chained('api') PathPart('push-gitea') Args(0) {
+    my ($self, $c) = @_;
+
+    $c->{stash}->{json}->{jobsetsTriggered} = [];
+
+    my $in = $c->request->{data};
+    my $url = $in->{repository}->{clone_url} or die;
+    $url =~ s/.git$//;
+    print STDERR "got push from Gitea repository $url\n";
+
+    triggerJobset($self, $c, $_, 0) foreach $c->model('DB::Jobsets')->search(
+        { 'project.enabled' => 1, 'me.enabled' => 1 },
+        { join => 'project'
+        , where => \ [ 'me.flake like ? or exists (select 1 from JobsetInputAlts where project = me.project and jobset = me.name and value like ?)', [ 'flake', "%$url%"], [ 'value', "%$url%" ] ]
+        });
+
+    $c->response->body("");
+}
+
 1;

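The new `push_gitea` endpoint above strips a trailing `.git` from the payload's `repository.clone_url` before matching it against jobset flakes and inputs. A minimal Python sketch of that normalization, assuming a typical Gitea push payload (the helper name here is illustrative, not part of Hydra):

```python
import re

def normalize_clone_url(clone_url: str) -> str:
    """Strip a trailing '.git' so 'https://host/o/r.git' and
    'https://host/o/r' match the same jobset flake/input values."""
    # Note: the Perl code uses s/.git$//; escaping the dot is the stricter form.
    return re.sub(r"\.git$", "", clone_url)

# A Gitea push payload carries the URL under repository.clone_url:
payload = {"repository": {"clone_url": "https://gitea.example.org/owner/repo.git"}}
url = normalize_clone_url(payload["repository"]["clone_url"])
print(url)  # https://gitea.example.org/owner/repo
```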

@@ -39,6 +39,9 @@ sub buildChain :Chained('/') :PathPart('build') :CaptureArgs(1) {
     $c->stash->{project} = $c->stash->{build}->project;
     $c->stash->{jobset} = $c->stash->{build}->jobset;
     $c->stash->{job} = $c->stash->{build}->job;
+
+    checkProjectVisibleForGuest($c, $c->stash->{project});
+
     $c->stash->{runcommandlogs} = [$c->stash->{build}->runcommandlogs->search({}, {order_by => ["id DESC"]})];

     $c->stash->{runcommandlogProblem} = undef;


@@ -3,6 +3,7 @@ package Hydra::Controller::Channel;
 use strict;
 use warnings;
 use base 'Hydra::Base::Controller::REST';
+use Hydra::Helper::CatalystUtils;

 sub channel : Chained('/') PathPart('channel/custom') CaptureArgs(3) {
@@ -10,6 +11,8 @@ sub channel : Chained('/') PathPart('channel/custom') CaptureArgs(3) {
     $c->stash->{project} = $c->model('DB::Projects')->find($projectName);

+    checkProjectVisibleForGuest($c, $c->stash->{project});
+
     notFound($c, "Project $projectName doesn't exist.")
         if !$c->stash->{project};


@@ -27,6 +27,8 @@ sub job : Chained('/') PathPart('job') CaptureArgs(3) {
     $c->stash->{job} = $jobName;
     $c->stash->{project} = $c->stash->{jobset}->project;
+
+    checkProjectVisibleForGuest($c, $c->stash->{project});
 }

 sub shield :Chained('job') PathPart('shield') Args(0) {


@@ -17,6 +17,8 @@ sub jobsetChain :Chained('/') :PathPart('jobset') :CaptureArgs(2) {
     $c->stash->{project} = $project;

+    checkProjectVisibleForGuest($c, $c->stash->{project});
+
     $c->stash->{jobset} = $project->jobsets->find({ name => $jobsetName });

     if (!$c->stash->{jobset} && !($c->action->name eq "jobset" and $c->request->method eq "PUT")) {
@@ -294,26 +296,24 @@ sub updateJobset {
     # Set the inputs of this jobset.
     $jobset->jobsetinputs->delete;

-    if ($type == 0) {
-        foreach my $name (keys %{$c->stash->{params}->{inputs}}) {
+    foreach my $name (keys %{$c->stash->{params}->{inputs}}) {
         my $inputData = $c->stash->{params}->{inputs}->{$name};
         my $type = $inputData->{type};
         my $value = $inputData->{value};
         my $emailresponsible = defined $inputData->{emailresponsible} ? 1 : 0;
         my $types = knownInputTypes($c);

         badRequest($c, "Invalid input name $name.") unless $name =~ /^[[:alpha:]][\w-]*$/;
         badRequest($c, "Invalid input type $type; valid types: $types.") unless defined $c->stash->{inputTypes}->{$type};

         my $input = $jobset->jobsetinputs->create(
             { name => $name,
               type => $type,
               emailresponsible => $emailresponsible
             });

         $value = checkInputValue($c, $name, $type, $value);
         $input->jobsetinputalts->create({altnr => 0, value => $value});
-        }
     }
 }


@@ -19,6 +19,8 @@ sub evalChain : Chained('/') PathPart('eval') CaptureArgs(1) {
     $c->stash->{eval} = $eval;
     $c->stash->{jobset} = $eval->jobset;
     $c->stash->{project} = $eval->jobset->project;
+
+    checkProjectVisibleForGuest($c, $c->stash->{project});
 }


@@ -16,6 +16,8 @@ sub projectChain :Chained('/') :PathPart('project') :CaptureArgs(1) {
     $c->stash->{project} = $c->model('DB::Projects')->find($projectName);

+    checkProjectVisibleForGuest($c, $c->stash->{project});
+
     $c->stash->{isProjectOwner} = !$isCreate && isProjectOwner($c, $c->stash->{project});

     notFound($c, "Project $projectName doesn't exist.")
@@ -161,6 +163,7 @@ sub updateProject {
         , homepage => trim($c->stash->{params}->{homepage})
         , enabled => defined $c->stash->{params}->{enabled} ? 1 : 0
         , hidden => defined $c->stash->{params}->{visible} ? 0 : 1
+        , private => defined $c->stash->{params}->{private} ? 1 : 0
         , owner => $owner
         , enable_dynamic_run_command => $enable_dynamic_run_command
         , declfile => trim($c->stash->{params}->{declarative}->{file})


@@ -35,6 +35,7 @@ sub noLoginNeeded {
     return $whitelisted ||
         $c->request->path eq "api/push-github" ||
+        $c->request->path eq "api/push-gitea" ||
         $c->request->path eq "google-login" ||
         $c->request->path eq "github-redirect" ||
         $c->request->path eq "github-login" ||
@@ -80,7 +81,7 @@ sub begin :Private {
     $_->supportedInputTypes($c->stash->{inputTypes}) foreach @{$c->hydra_plugins};

     # XSRF protection: require POST requests to have the same origin.
-    if ($c->req->method eq "POST" && $c->req->path ne "api/push-github") {
+    if ($c->req->method eq "POST" && $c->req->path ne "api/push-github" && $c->req->path ne "api/push-gitea") {
         my $referer = $c->req->header('Referer');
         $referer //= $c->req->header('Origin');
         my $base = $c->req->base;
@@ -109,7 +110,13 @@ sub deserialize :ActionClass('Deserialize') { }
 sub index :Path :Args(0) {
     my ($self, $c) = @_;
     $c->stash->{template} = 'overview.tt';
-    $c->stash->{projects} = [$c->model('DB::Projects')->search({}, {order_by => ['enabled DESC', 'name']})];
+
+    my $includePrivate = $c->user_exists ? [1,0] : [0];
+    $c->stash->{projects} = [$c->model('DB::Projects')->search(
+        {private => {-in => $includePrivate}},
+        {order_by => ['enabled DESC', 'name']}
+    )];
+
     $c->stash->{newsItems} = [$c->model('DB::NewsItems')->search({}, { order_by => ['createtime DESC'], rows => 5 })];
     $self->status_ok($c,
         entity => $c->stash->{projects}
@@ -121,15 +128,23 @@ sub queue :Local :Args(0) :ActionClass('REST') { }

 sub queue_GET {
     my ($self, $c) = @_;
+
+    my $criteria = {finished => 0};
+    my $extra = {
+        columns => [@buildListColumns],
+        order_by => ["priority DESC", "id"]
+    };
+    unless ($c->user_exists) {
+        $criteria->{"project.private"} = 0;
+        $extra->{join} = {"jobset" => "project"};
+    }
+
     $c->stash->{template} = 'queue.tt';
     $c->stash->{flashMsg} //= $c->flash->{buildMsg};
     $self->status_ok(
         $c,
         entity => [$c->model('DB::Builds')->search(
-            { finished => 0 },
-            { order_by => ["globalpriority desc", "id"]
-            , columns => [@buildListColumns]
-            })]
+            $criteria,
+            $extra
+        )]
     );
 }
@@ -138,10 +153,15 @@ sub queue_summary :Local :Path('queue-summary') :Args(0) {
     my ($self, $c) = @_;
     $c->stash->{template} = 'queue-summary.tt';

+    my $extra = " where ";
+    unless ($c->user_exists) {
+        $extra = "inner join Projects p on p.name = project where p.private = 0 and ";
+    }
+
     $c->stash->{queued} = dbh($c)->selectall_arrayref(
         "select jobsets.project as project, jobsets.name as jobset, count(*) as queued, min(timestamp) as oldest, max(timestamp) as newest from Builds " .
         "join Jobsets jobsets on jobsets.id = builds.jobset_id " .
-        "where finished = 0 group by jobsets.project, jobsets.name order by queued desc",
+        "$extra finished = 0 group by jobsets.project, jobsets.name order by queued desc",
         { Slice => {} });

     $c->stash->{systems} = dbh($c)->selectall_arrayref(
@@ -154,12 +174,19 @@ sub status :Local :Args(0) :ActionClass('REST') { }

 sub status_GET {
     my ($self, $c) = @_;
+
+    my $criteria = { "buildsteps.busy" => { '!=', 0 } };
+    my $join = ["buildsteps"];
+    unless ($c->user_exists) {
+        $criteria->{"project.private"} = 0;
+        push @{$join}, {"jobset" => "project"};
+    }
+
     $self->status_ok(
         $c,
         entity => [$c->model('DB::Builds')->search(
-            { "buildsteps.busy" => { '!=', 0 } },
+            $criteria,
             { order_by => ["globalpriority DESC", "id"],
-              join => "buildsteps",
+              join => $join,
               columns => [@buildListColumns, 'buildsteps.drvpath', 'buildsteps.type']
             })]
     );
@@ -201,13 +228,18 @@ sub machines :Local Args(0) {
         }
     }

+    my $extra = "where";
+    unless ($c->user_exists) {
+        $extra = "inner join Projects p on p.name = jobsets.project where p.private = 0 and ";
+    }
+
     $c->stash->{machines} = $machines;
     $c->stash->{steps} = dbh($c)->selectall_arrayref(
         "select build, stepnr, s.system as system, s.drvpath as drvpath, machine, s.starttime as starttime, jobsets.project as project, jobsets.name as jobset, job, s.busy as busy " .
         "from BuildSteps s " .
         "join Builds b on s.build = b.id " .
         "join Jobsets jobsets on jobsets.id = b.jobset_id " .
-        "where busy != 0 order by machine, stepnr",
+        "$extra busy != 0 order by machine, stepnr",
         { Slice => {} });
     $c->stash->{template} = 'machine-status.tt';
     $self->status_ok($c, entity => $c->stash->{machines});
@@ -449,16 +481,28 @@ sub steps :Local Args(0) {

     my $resultsPerPage = 20;

+    my $criteria = {
+        "me.starttime" => { '!=', undef },
+        "me.stoptime" => { '!=', undef }
+    };
+    my $extra = {
+        order_by => [ "me.stoptime desc" ],
+        rows => $resultsPerPage,
+        offset => ($page - 1) * $resultsPerPage,
+    };
+    unless ($c->user_exists) {
+        $criteria->{"project.private"} = 0;
+        $extra->{join} = [{"build" => {"jobset" => "project"}}];
+    }
+
     $c->stash->{page} = $page;
     $c->stash->{resultsPerPage} = $resultsPerPage;
     $c->stash->{steps} = [ $c->model('DB::BuildSteps')->search(
-        { starttime => { '!=', undef },
-          stoptime => { '!=', undef }
-        },
-        { order_by => [ "stoptime desc" ],
-          rows => $resultsPerPage,
-          offset => ($page - 1) * $resultsPerPage
-        }) ];
+        $criteria,
+        $extra
+    ) ];

     $c->stash->{total} = approxTableSize($c, "IndexBuildStepsOnStopTime");
 }
@@ -479,28 +523,58 @@ sub search :Local Args(0) {
     $c->model('DB')->schema->txn_do(sub {
         $c->model('DB')->schema->storage->dbh->do("SET LOCAL statement_timeout = 20000");
-        $c->stash->{projects} = [ $c->model('DB::Projects')->search(
-            { -and =>
+        my $projectCriteria = {
+            -and =>
                 [ { -or => [ name => { ilike => "%$query%" }, displayName => { ilike => "%$query%" }, description => { ilike => "%$query%" } ] }
                 , { hidden => 0 }
                 ]
-            },
-            { order_by => ["name"] } ) ];
+        };

-        $c->stash->{jobsets} = [ $c->model('DB::Jobsets')->search(
-            { -and =>
+        my $jobsetCriteria = {
+            -and =>
                 [ { -or => [ "me.name" => { ilike => "%$query%" }, "me.description" => { ilike => "%$query%" } ] }
                 , { "project.hidden" => 0, "me.hidden" => 0 }
                 ]
-            },
-            { order_by => ["project", "name"], join => ["project"] } ) ];
+        };

-        $c->stash->{jobs} = [ $c->model('DB::Builds')->search(
-            { "job" => { ilike => "%$query%" }
+        my $buildCriteria = {
+            "job" => { ilike => "%$query%" }
             , "project.hidden" => 0
             , "jobset.hidden" => 0
             , iscurrent => 1
-            },
+        };

+        my $buildSearchExtra = {
+            order_by => ["id desc"]
+            , rows => $c->stash->{limit}, join => []
+        };
+        my $outCriteria = {
+            "buildoutputs.path" => { ilike => "%$query%" }
+        };
+        my $drvCriteria = { "drvpath" => { ilike => "%$query%" } };
+
+        unless ($c->user_exists) {
+            $projectCriteria->{private} = 0;
+            $jobsetCriteria->{"project.private"} = 0;
+            $buildCriteria->{"project.private"} = 0;
+            push @{$buildSearchExtra->{join}}, {"jobset" => "project"};
+            $outCriteria->{"project.private"} = 0;
+            $drvCriteria->{"project.private"} = 0;
+        }
+
+        $c->stash->{projects} = [ $c->model('DB::Projects')->search(
+            $projectCriteria,
+            { order_by => ["name"] } ) ];
+
+        $c->stash->{jobsets} = [ $c->model('DB::Jobsets')->search(
+            $jobsetCriteria,
+            { order_by => ["project", "name"], join => ["project"] } ) ];
+
+        $c->stash->{jobs} = [ $c->model('DB::Builds')->search(
+            $buildCriteria,
             {
                 order_by => ["jobset.project", "jobset.name", "job"],
                 join => { "jobset" => "project" },
@@ -509,17 +583,16 @@ sub search :Local Args(0) {
         ];

         # Perform build search in separate queries to prevent seq scan on buildoutputs table.
+        my $outExtra = $buildSearchExtra;
+        push @{$outExtra->{join}}, "buildoutputs";
         $c->stash->{builds} = [ $c->model('DB::Builds')->search(
-            { "buildoutputs.path" => { ilike => "%$query%" } },
-            { order_by => ["id desc"], join => ["buildoutputs"]
-            , rows => $c->stash->{limit}
-            } ) ];
+            $outCriteria,
+            $outExtra
+        ) ];

         $c->stash->{buildsdrv} = [ $c->model('DB::Builds')->search(
-            { "drvpath" => { ilike => "%$query%" } },
-            { order_by => ["id desc"]
-            , rows => $c->stash->{limit}
-            } ) ];
+            $drvCriteria,
+            $buildSearchExtra ) ];

         $c->stash->{resource} = { projects => $c->stash->{projects},
                                   jobsets => $c->stash->{jobsets},

@@ -29,6 +29,7 @@ our @EXPORT = qw(
     approxTableSize
     requireLocalStore
     dbh
+    checkProjectVisibleForGuest
 );

@@ -256,6 +257,14 @@ sub requireProjectOwner {
         unless isProjectOwner($c, $project);
 }

+sub checkProjectVisibleForGuest {
+    my ($c, $project) = @_;
+
+    if (defined $project && $project->private == 1 && !$c->user_exists) {
+        my $projectName = $project->name;
+        notFound($c, "Project $projectName not found!");
+    }
+}
+
 sub isAdmin {
     my ($c) = @_;

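The `checkProjectVisibleForGuest` helper added above turns any private project into a 404 for unauthenticated requests, so guests cannot even confirm the project exists. A rough Python model of the same predicate (names and the exception type are illustrative, not Hydra's API):

```python
class NotFound(Exception):
    """Stand-in for Hydra's notFound() response helper."""

def check_project_visible_for_guest(project, user_exists: bool):
    """Mirror of the Perl helper: a private project 404s unless a user is logged in."""
    if project is not None and project.get("private") == 1 and not user_exists:
        raise NotFound(f"Project {project['name']} not found!")

# Guests see public projects but get a 404 for private ones:
check_project_visible_for_guest({"name": "pub", "private": 0}, user_exists=False)
try:
    check_project_visible_for_guest({"name": "sec", "private": 1}, user_exists=False)
except NotFound as e:
    print(e)  # Project sec not found!
```

Raising "not found" rather than "forbidden" avoids leaking which private project names exist.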

@@ -182,17 +182,34 @@ sub findLog {
     my ($c, $drvPath, @outPaths) = @_;

     if (defined $drvPath) {
+        unless ($c->user_exists) {
+            my $existsForGuest = $c->model('DB::BuildSteps')->search(
+                {"me.drvpath" => $drvPath, "project.private" => 0},
+                {join => {build => {"jobset" => "project"}}}
+            );
+            if ($existsForGuest == 0) {
+                notFound($c, "Resource not found");
+            }
+        }
         my $logPath = getDrvLogPath($drvPath);
         return $logPath if defined $logPath;
     }

     return undef if scalar @outPaths == 0;

+    my $join = ["buildstepoutputs"];
+    my $criteria = { path => { -in => [@outPaths] } };
+    unless ($c->user_exists) {
+        push @{$join}, {"build" => {jobset => "project"}};
+        $criteria->{"project.private"} = 0;
+    }
+
     my @steps = $c->model('DB::BuildSteps')->search(
-        { path => { -in => [@outPaths] } },
+        $criteria,
         { select => ["drvpath"]
         , distinct => 1
-        , join => "buildstepoutputs"
+        , join => $join
         });

     foreach my $step (@steps) {
@@ -285,9 +302,19 @@ sub getEvals {

     my $me = $evals_result_set->current_source_alias;

-    my @evals = $evals_result_set->search(
-        { hasnewbuilds => 1 },
-        { order_by => "$me.id DESC", rows => $rows, offset => $offset });
+    my $criteria = { hasnewbuilds => 1 };
+    my $extra = {
+        order_by => "$me.id DESC",
+        rows => $rows,
+        offset => $offset
+    };
+    unless ($c->user_exists) {
+        $extra->{join} = {"jobset" => "project"};
+        $criteria->{"project.private"} = 0;
+    }
+
+    my @evals = $evals_result_set->search($criteria, $extra);
+
     my @res = ();
     my $cache = {};
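Several of the listing queries in this diff follow the same pattern: build the base search criteria and join list first, then widen them with a `project.private = 0` restriction and an extra join only when no user is logged in. A small Python sketch of that pattern, using dicts and lists as stand-ins for the DBIx::Class search arguments (the store path is a placeholder):

```python
def guest_scoped_search_args(user_exists: bool):
    """Base criteria/join as in findLog's output-path lookup; for guests,
    add a join through build -> jobset -> project plus a privacy filter."""
    criteria = {"path": {"-in": ["/nix/store/example-output"]}}
    join = ["buildstepoutputs"]
    if not user_exists:
        join.append({"build": {"jobset": "project"}})
        criteria["project.private"] = 0
    return criteria, join

# Authenticated users query without the extra join; guests get the filter:
crit, join = guest_scoped_search_args(user_exists=False)
print("project.private" in crit, len(join))  # True 2
```

Keeping the privacy filter in one `unless ($c->user_exists)` block per query makes it easy to audit that every guest-reachable listing is restricted.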

@@ -62,6 +62,12 @@ __PACKAGE__->table("projects");
   default_value: 0
   is_nullable: 0

+=head2 private
+
+  data_type: 'integer'
+  default_value: 0
+  is_nullable: 0
+
 =head2 owner

   data_type: 'text'
@@ -107,6 +113,8 @@ __PACKAGE__->add_columns(
   { data_type => "integer", default_value => 1, is_nullable => 0 },
   "hidden",
   { data_type => "integer", default_value => 0, is_nullable => 0 },
+  "private",
+  { data_type => "integer", default_value => 0, is_nullable => 0 },
   "owner",
   { data_type => "text", is_foreign_key => 1, is_nullable => 0 },
   "homepage",
@@ -236,8 +244,8 @@ Composing rels: L</projectmembers> -> username

 __PACKAGE__->many_to_many("usernames", "projectmembers", "username");

-# Created by DBIx::Class::Schema::Loader v0.07049 @ 2022-01-24 14:20:32
-# DO NOT MODIFY THIS OR ANYTHING ABOVE! md5sum:PtXDyT8Pc7LYhhdEG39EKQ
+# Created by DBIx::Class::Schema::Loader v0.07049 @ 2022-11-22 12:51:02
+# DO NOT MODIFY THIS OR ANYTHING ABOVE! md5sum:ppyLpFU2fZASFANhD7vUgg

 use JSON::MaybeXS;

@@ -267,6 +275,7 @@ sub as_json {
         "enabled" => $self->get_column("enabled") ? JSON::MaybeXS::true : JSON::MaybeXS::false,
         "enable_dynamic_run_command" => $self->get_column("enable_dynamic_run_command") ? JSON::MaybeXS::true : JSON::MaybeXS::false,
         "hidden" => $self->get_column("hidden") ? JSON::MaybeXS::true : JSON::MaybeXS::false,
+        "private" => $self->get_column("private") ? JSON::MaybeXS::true : JSON::MaybeXS::false,
         "jobsets" => [ map { $_->name } $self->jobsets ]
     );


@@ -42,7 +42,15 @@
 [% END %]

 [% BLOCK renderJobsetInputs %]
-<table class="table table-striped table-condensed show-on-legacy">
+<div class="card show-on-flake border-danger">
+  <div class="text-danger card-body">
+    <h5 class="card-title">Jobset Inputs don't take any effect for flakes</h5>
+    <p class="card-text">
+      These are only available to configure Hydra plugins.
+    </p>
+  </div>
+</div>
+<table class="table table-striped table-condensed">
   <thead>
     <tr><th></th><th>Input name</th><th>Type</th><th style="width: 50%">Value</th><th>Notify committers</th></tr>
   </thead>


@@ -17,6 +17,13 @@
     </div>
   </div>

+  <div class="form-group row">
+    <label class="col-sm-3" for="editprojectprivate">Private</label>
+    <div class="col-sm-9">
+      <input type="checkbox" id="editprojectprivate" name="private" [% IF project.private %] checked="checked" [% END %]/>
+    </div>
+  </div>
+
   <div class="form-group row">
     <label class="col-sm-3" for="editprojectidentifier">Identifier</label>
     <div class="col-sm-9">


@@ -54,7 +54,7 @@
   <tbody>
     [% FOREACH p IN projects %]
     <tr class="project [% IF !p.enabled %]disabled-project[% END %]">
-      <td><span class="[% IF !p.enabled %]disabled-project[% END %] [%+ IF p.hidden %]hidden-project[% END %]">[% INCLUDE renderProjectName project=p.name inRow=1 %]</span></td>
+      <td>[% IF p.private %]&#128274;[% END %] <span class="[% IF !p.enabled %]disabled-project[% END %] [%+ IF p.hidden %]hidden-project[% END %]">[% INCLUDE renderProjectName project=p.name inRow=1 %]</span></td>
       <td>[% HTML.escape(p.displayname) %]</td>
       <td>[% WRAPPER maybeLink uri=p.homepage %][% HTML.escape(p.description) %][% END %]</td>
     </tr>


@@ -1,4 +1,9 @@
-[% WRAPPER layout.tt title="Project $project.name" %]
+[% IF project.private %]
+[% lock = ' &#128274;' %]
+[% ELSE %]
+[% lock = '' %]
+[% END %]
+[% WRAPPER layout.tt titleHTML="Project $project.name$lock" title="Project $project.name" %]
 [% PROCESS common.tt %]

 <ul class="nav nav-tabs">


@@ -44,6 +44,7 @@ create table Projects (
     description   text,
     enabled       integer not null default 1,
     hidden        integer not null default 0,
+    private       integer not null default 0,
     owner         text not null,
     homepage      text, -- URL for the project
     declfile      text, -- File containing declarative jobset specification


@@ -1,5 +1,3 @@
 -- Records of RunCommand executions
 --
 -- The intended flow is:


@@ -35,6 +35,17 @@ my $queuedBuilds = $ctx->makeAndEvaluateJobset(
     build => 0
 );

+# Login and save cookie for future requests
+my $req = request(POST '/login',
+    Referer => 'http://localhost/',
+    Content => {
+        username => 'root',
+        password => 'rootPassword'
+    }
+);
+is($req->code, 302, "Logging in gets a 302");
+my $cookie = $req->header("set-cookie");
+
 subtest "/api/queue" => sub {
     my $response = request(GET '/api/queue?nr=1');
     ok($response->is_success, "The API enpdoint showing the queue returns 200.");
@@ -102,7 +113,7 @@ subtest "/api/nrbuilds" => sub {
 };

 subtest "/api/push" => sub {
-    subtest "with a specific jobset" => sub {
+    subtest "without authentication" => sub {
         my $build = $finishedBuilds->{"one_job"};
         my $jobset = $build->jobset;
         my $projectName = $jobset->project->name;
@@ -110,6 +121,18 @@ subtest "/api/push" => sub {
         is($jobset->forceeval, undef, "The existing jobset is not set to be forced to eval");

         my $response = request(GET "/api/push?jobsets=$projectName:$jobsetName&force=1");
+        is($response->code, 403, "The API enpdoint for triggering jobsets requires authentication.");
+    };
+
+    subtest "with a specific jobset" => sub {
+        my $build = $finishedBuilds->{"one_job"};
+        my $jobset = $build->jobset;
+        my $projectName = $jobset->project->name;
+        my $jobsetName = $jobset->name;
+        is($jobset->forceeval, undef, "The existing jobset is not set to be forced to eval");
+
+        my $response = request(GET "/api/push?jobsets=$projectName:$jobsetName&force=1",
+            Cookie => $cookie);
         ok($response->is_success, "The API enpdoint for triggering jobsets returns 200.");

         my $data = is_json($response);
@@ -128,7 +151,8 @@ subtest "/api/push" => sub {

         print STDERR $repo;

-        my $response = request(GET "/api/push?repos=$repo&force=1");
+        my $response = request(GET "/api/push?repos=$repo&force=1",
+            Cookie => $cookie);
         ok($response->is_success, "The API enpdoint for triggering jobsets returns 200.");

         my $data = is_json($response);
@@ -172,7 +196,7 @@ subtest "/api/push-github" => sub {
             "Content" => encode_json({
                 repository => {
                     owner => {
-                        name => "OWNER",
+                        login => "OWNER",
                     },
                     name => "LEGACY-REPO",
                 }
@@ -198,7 +222,7 @@ subtest "/api/push-github" => sub {
             "Content" => encode_json({
                 repository => {
                     owner => {
-                        name => "OWNER",
+                        login => "OWNER",
                     },
                     name => "FLAKE-REPO",
                 }


@@ -11,20 +11,14 @@ my $ctx = test_context();

 Catalyst::Test->import('Hydra');

-my $user = $ctx->db()->resultset('Users')->create({
-    username => 'alice',
-    emailaddress => 'root@invalid.org',
-    password => '!'
-});
-$user->setPassword('foobar');
-$user->userroles->update_or_create({ role => 'admin' });
+$ctx->db(); # Ensure DB initialization.

 # Login and save cookie for future requests
 my $req = request(POST '/login',
     Referer => 'http://localhost/',
     Content => {
-        username => 'alice',
-        password => 'foobar'
+        username => 'root',
+        password => 'rootPassword'
     }
 );
 is($req->code, 302, "Logging in gets a 302");


@@ -51,7 +51,8 @@ subtest "Read project 'tests'" => sub {
         homepage => "",
         jobsets => [],
         name => "tests",
-        owner => "root"
+        owner => "root",
+        "private" => JSON::MaybeXS::false
     });
 };

@@ -96,7 +97,8 @@ subtest "Transitioning from declarative project to normal" => sub {
             file => "bogus",
             type => "boolean",
             value => "false"
-        }
+        },
+        "private" => JSON::MaybeXS::false
     });
 };

@@ -135,7 +137,8 @@ subtest "Transitioning from declarative project to normal" => sub {
         homepage => "",
         jobsets => [],
         name => "tests",
-        owner => "root"
+        owner => "root",
+        "private" => JSON::MaybeXS::false
     });
 };
 };


@@ -115,11 +115,13 @@ sub db {
         $self->{_db} = Hydra::Model::DB->new();

         if (!(defined $setup && $setup == 0)) {
-            $self->{_db}->resultset('Users')->create({
+            my $user = $self->{_db}->resultset('Users')->create({
                 username => "root",
                 emailaddress => 'root@invalid.org',
-                password => ''
+                password => '!'
             });
+            $user->setPassword('rootPassword');
+            $user->userroles->update_or_create({ role => 'admin' });
         }
     }

t/private-projects.t (new file, 168 lines)

@@ -0,0 +1,168 @@
use strict;
use warnings;
use Setup;
use Test2::V0;
use HTTP::Request::Common;
use HTML::TreeBuilder::XPath;
use JSON::MaybeXS;
my %ctx = test_init(
use_external_destination_store => 0
);
require Hydra::Schema;
require Hydra::Model::DB;
require Catalyst::Test;
Catalyst::Test->import('Hydra');
my $db = Hydra::Model::DB->new;
hydra_setup($db);
my $scratch = "$ctx{tmpdir}/scratch";
mkdir $scratch;
my $uri = "file://$scratch/git-repo";
my $jobset = createJobsetWithOneInput('gitea', 'git-input.nix', 'src', 'git', $uri, $ctx{jobsdir});
ok(request('/project/tests')->is_success, "Project 'tests' exists");
my $project = $db->resultset('Projects')->find({name => "tests"})->update({private => 1});
ok(
!request('/project/tests')->is_success,
"Project 'tests' is private now and should be unreachable"
);
my $user = $db->resultset('Users')->create({
username => "testing",
emailaddress => 'testing@invalid.org',
password => ''
});
$user->setPassword('foobar');
my $auth = request(
POST(
'/login',
{username => 'testing', 'password' => 'foobar'},
Origin => 'http://localhost', Accept => 'application/json'
),
{host => 'localhost'}
);
ok(
$auth->code == 302,
"Successfully logged in"
);
my $cookie = (split /;/, $auth->header('set_cookie'))[0];
ok(
request(GET(
'/project/tests',
Cookie => $cookie
))->is_success,
"Project visible for authenticated user."
);
updateRepository('gitea', "$ctx{testdir}/jobs/git-update.sh", $scratch);
ok(evalSucceeds($jobset), "Evaluating nix expression");
is(nrQueuedBuildsForJobset($jobset), 1, "Evaluating jobs/runcommand.nix should result in 1 build");
ok(
request('/eval/1')->code == 404,
'Eval of private project not available for unauthenticated user.'
);
ok(
request(GET '/eval/1', Cookie => $cookie)->is_success,
'Eval available for authenticated User'
);
ok(
request(GET '/jobset/tests/gitea', Cookie => $cookie)->is_success,
'Jobset available for user'
);
ok(
request(GET '/jobset/tests/gitea')->code == 404,
'Jobset unavailable for guest'
);
ok(
request('/build/1')->code == 404,
'Build of private project not available for unauthenticated user.'
);
ok(
request(GET '/build/1', Cookie => $cookie)->is_success,
'Build available for authenticated User'
);
(my $build) = queuedBuildsForJobset($jobset);
ok(runBuild($build), "Build should succeed with exit code 0");
ok(
request(GET '/jobset/tests/gitea/channel/latest', Cookie => $cookie)->is_success,
'Channel available for authenticated user'
);
ok(
request(GET '/jobset/tests/gitea/channel/latest')->code == 404,
'Channel unavailable for guest'
);
updateRepository('gitea', "$ctx{testdir}/jobs/git-update.sh", $scratch);
ok(evalSucceeds($jobset), "Evaluating nix expression");
my $latest_builds_unauth = request(GET "/all");
my $tree = HTML::TreeBuilder::XPath->new;
$tree->parse($latest_builds_unauth->content);
ok(!$tree->exists('/html//tbody/tr'), "No builds available");
my $latest_builds = request(GET "/all", Cookie => $cookie);
$tree = HTML::TreeBuilder::XPath->new;
$tree->parse($latest_builds->content);
ok($tree->exists('/html//tbody/tr'), "Builds available");
my $p2 = $db->resultset("Projects")->create({name => "public", displayname => "public", owner => "root"});
my $jobset2 = $p2->jobsets->create({
name => "public", nixexprpath => 'basic.nix', nixexprinput => "jobs", emailoverride => ""
});
my $jobsetinput = $jobset2->jobsetinputs->create({name => "jobs", type => "path"});
$jobsetinput->jobsetinputalts->create({altnr => 0, value => $ctx{jobsdir}});
updateRepository('gitea', "$ctx{testdir}/jobs/git-update.sh", $scratch);
ok(evalSucceeds($jobset2), "Evaluating nix expression");
is(
nrQueuedBuildsForJobset($jobset2),
3,
"Evaluating jobs/runcommand.nix should result in 3 builds"
);
(my $b1, my $b2, my $b3) = queuedBuildsForJobset($jobset2);
ok(runBuild($b1), "Build should succeed with exit code 0");
ok(runBuild($b2), "Build should succeed with exit code 0");
ok(runBuild($b3), "Build should succeed with exit code 0");
my $latest_builds_unauth2 = request(GET "/all");
$tree = HTML::TreeBuilder::XPath->new;
$tree->parse($latest_builds_unauth2->content);
is(
scalar $tree->findvalues('/html//tbody/tr'),
3,
"Three builds available"
);
my $latest_builds2 = request(GET "/all", Cookie => $cookie);
$tree = HTML::TreeBuilder::XPath->new;
$tree->parse($latest_builds2->content);
is(
scalar $tree->findvalues('/html//tbody/tr'),
4,
    "Four builds available"
);
done_testing;
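The test file above asserts an access matrix: guests get 404s on the build, eval, jobset, and channel pages of a private project, while an authenticated user (or anyone, for a public project) gets 200s. The rule it exercises can be summarized as a small table-driven sketch (the checker function is illustrative, not part of the test suite):

```python
def expected_status(project_private: bool, authenticated: bool) -> int:
    """Visibility rule exercised by t/private-projects.t:
    private resources 404 for guests, everything else is 200."""
    return 200 if (authenticated or not project_private) else 404

# (resource, project is private, requester authenticated, expected HTTP status)
cases = [
    ("/build/1",             True,  False, 404),
    ("/build/1",             True,  True,  200),
    ("/eval/1",              True,  False, 404),
    ("/jobset/tests/gitea",  True,  True,  200),
    ("/project/public",      False, False, 200),
]
for route, private, auth, want in cases:
    assert expected_status(private, auth) == want
print("all cases match")  # all cases match
```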