Add markdown files for documentation

projects.xml and declarative-projects.xml were merged with xmllint, and
then I ran this to convert the files:
for i in *.xml; do pandoc -s -f docbook -t markdown $i -o ${i/xml/md}; done
Ismaël Bouya 2021-02-24 00:25:56 +01:00
parent e072c1d741
commit 9d916877fb
9 changed files with 1170 additions and 34 deletions


@@ -1,33 +1,6 @@
-DOCBOOK_FILES = installation.xml introduction.xml manual.xml projects.xml hacking.xml
+MD_FILES = src/*.md
-EXTRA_DIST = $(DOCBOOK_FILES)
+EXTRA_DIST = $(MD_FILES)
-xsltproc_opts = \
-  --param callout.graphics.extension \'.gif\' \
-  --param section.autolabel 1 \
-  --param section.label.includes.component.label 1
-# Include the manual in the tarball.
-dist_html_DATA = manual.html
-# Embed Docbook's callout images in the distribution.
-EXTRA_DIST += images
-manual.html: $(DOCBOOK_FILES)
-	$(XSLTPROC) $(xsltproc_opts) --nonet --xinclude \
-	  --output manual.html \
-	  $(docbookxsl)/xhtml/docbook.xsl manual.xml
-images:
-	$(MKDIR_P) images/callouts
-	cp $(docbookxsl)/images/callouts/*.gif images/callouts
-	chmod +wx images images/callouts
-install-data-hook: images
-	$(INSTALL) -d $(DESTDIR)$(htmldir)/images/callouts
-	$(INSTALL_DATA) images/callouts/* $(DESTDIR)$(htmldir)/images/callouts
-	ln -sfn manual.html $(DESTDIR)$(htmldir)/index.html
-distclean-hook:
-	-rm -rf images
+install: $(MD_FILES)
+	mdbook build . -d $(docdir)


@@ -0,0 +1,9 @@
# Hydra User's Guide
- [Introduction](introduction.md)
- [Installation](installation.md)
- [Creating and Managing Projects](projects.md)
- [Using the external API](api.md)
-----------
[About](about.md)
[Hacking](hacking.md)

doc/manual/src/about.md Normal file

@@ -0,0 +1,6 @@
# Authors
* Eelco Dolstra, Delft University of Technology, Department of Software Technology
* Rob Vermaas, Delft University of Technology, Department of Software Technology
* Eelco Visser, Delft University of Technology, Department of Software Technology
* Ludovic Courtès

doc/manual/src/api.md Normal file

@@ -0,0 +1,249 @@
Using the external API
======================
To be able to create integrations with other services, Hydra exposes an
external API that you can manage projects with.
The API is accessed over HTTP(s) where all data is sent and received as
JSON.
Creating resources requires the caller to be authenticated, while
retrieving resources does not.
The API does not have a separate URL structure for its endpoints.
Instead, you request the pages of the web interface as `application/json`
to use the API.
List projects
-------------
To list all the `projects` of the Hydra install:
GET /
Accept: application/json
This will give you a list of `projects`, where each `project` contains
general information and a list of its `job sets`.
**Example**
curl -i -H 'Accept: application/json' \
https://hydra.nixos.org
**Note:** this response is truncated
GET https://hydra.nixos.org/
HTTP/1.1 200 OK
Content-Type: application/json
[
{
"displayname": "Acoda",
"name": "acoda",
"description": "Acoda is a tool set for automatic data migration along an evolving data model",
"enabled": 0,
"owner": "sander",
"hidden": 1,
"jobsets": [
"trunk"
]
},
{
"displayname": "cabal2nix",
"name": "cabal2nix",
"description": "Convert Cabal files into Nix build instructions",
"enabled": 0,
"owner": "simons@cryp.to",
"hidden": 1,
"jobsets": [
"master"
]
}
]
Get a single project
--------------------
To get a single `project` by identifier:
GET /project/:project-identifier
Accept: application/json
**Example**
curl -i -H 'Accept: application/json' \
https://hydra.nixos.org/project/hydra
GET https://hydra.nixos.org/project/hydra
HTTP/1.1 200 OK
Content-Type: application/json
{
"description": "Hydra, the Nix-based continuous build system",
"hidden": 0,
"displayname": "Hydra",
"jobsets": [
"hydra-master",
"hydra-ant-logger-trunk",
"master",
"build-ng"
],
"name": "hydra",
"enabled": 1,
"owner": "eelco"
}
Get a single job set
--------------------
To get a single `job set` by identifier:
GET /jobset/:project-identifier/:jobset-identifier
Accept: application/json
**Example**
curl -i -H 'Accept: application/json' \
https://hydra.nixos.org/jobset/hydra/build-ng
GET https://hydra.nixos.org/jobset/hydra/build-ng
HTTP/1.1 200 OK
Content-Type: application/json
{
"errormsg": "evaluation failed due to signal 9 (Killed)",
"fetcherrormsg": null,
"nixexprpath": "release.nix",
"nixexprinput": "hydraSrc",
"emailoverride": "rob.vermaas@gmail.com, eelco.dolstra@logicblox.com",
"jobsetinputs": {
"officialRelease": {
"jobsetinputalts": [
"false"
]
},
"hydraSrc": {
"jobsetinputalts": [
"https://github.com/NixOS/hydra.git build-ng"
]
},
"nixpkgs": {
"jobsetinputalts": [
"https://github.com/NixOS/nixpkgs.git release-14.12"
]
}
},
"enabled": 0
}
List evaluations
----------------
To list the `evaluations` of a `job set` by identifier:
GET /jobset/:project-identifier/:jobset-identifier/evals
Accept: application/json
**Example**
curl -i -H 'Accept: application/json' \
https://hydra.nixos.org/jobset/hydra/build-ng/evals
**Note:** this response is truncated
GET https://hydra.nixos.org/jobset/hydra/build-ng/evals
HTTP/1.1 200 OK
Content-Type: application/json
{
"evals": [
{
"jobsetevalinputs": {
"nixpkgs": {
"dependency": null,
"type": "git",
"value": null,
"uri": "https://github.com/NixOS/nixpkgs.git",
"revision": "f60e48ce81b6f428d072d3c148f6f2e59f1dfd7a"
},
"hydraSrc": {
"dependency": null,
"type": "git",
"value": null,
"uri": "https://github.com/NixOS/hydra.git",
"revision": "48d6f0de2ab94f728d287b9c9670c4d237e7c0f6"
},
"officialRelease": {
"dependency": null,
"value": "false",
"type": "boolean",
"uri": null,
"revision": null
}
},
"hasnewbuilds": 1,
"builds": [
24670686,
24670684,
24670685,
24670687
],
"id": 1213758
}
],
"first": "?page=1",
"last": "?page=1"
}
Get a single build
------------------
To get a single `build` by its id:
GET /build/:build-id
Accept: application/json
**Example**
curl -i -H 'Accept: application/json' \
https://hydra.nixos.org/build/24670686
GET /build/24670686
HTTP/1.1 200 OK
Content-Type: application/json
{
"job": "tests.api.x86_64-linux",
"jobsetevals": [
1213758
],
"buildstatus": 0,
"buildmetrics": null,
"project": "hydra",
"system": "x86_64-linux",
"priority": 100,
"releasename": null,
"starttime": 1439402853,
"nixname": "vm-test-run-unnamed",
"timestamp": 1439388618,
"id": 24670686,
"stoptime": 1439403403,
"jobset": "build-ng",
"buildoutputs": {
"out": {
"path": "/nix/store/lzrxkjc35mhp8w7r8h82g0ljyizfchma-vm-test-run-unnamed"
}
},
"buildproducts": {
"1": {
"path": "/nix/store/lzrxkjc35mhp8w7r8h82g0ljyizfchma-vm-test-run-unnamed",
"defaultpath": "log.html",
"type": "report",
"sha256hash": null,
"filesize": null,
"name": "",
"subtype": "testlog"
}
},
"finished": 1
}

doc/manual/src/hacking.md Normal file

@@ -0,0 +1,28 @@
Hacking
=======
This section provides some notes on how to hack on Hydra. To get the
latest version of Hydra from GitHub:
$ git clone git://github.com/NixOS/hydra.git
$ cd hydra
To build it and its dependencies:
$ nix-build release.nix -A build.x86_64-linux
To build all dependencies and start a shell in which all environment
variables (such as PERL5LIB) are set up so that those dependencies can
be found:
$ nix-shell
To build Hydra, you should then do:
[nix-shell]$ ./bootstrap
[nix-shell]$ configurePhase
[nix-shell]$ make
You can run the Hydra web server in your source tree as follows:
$ ./src/script/hydra-server


@@ -0,0 +1,237 @@
Installation
============
This chapter explains how to install Hydra on your own build farm
server.
Prerequisites
-------------
To install and use Hydra you need to have installed the following
dependencies:
- Nix
- PostgreSQL
- many Perl packages, notably Catalyst, EmailSender, and NixPerl (see
the [Hydra expression in
Nixpkgs](https://github.com/NixOS/hydra/blob/master/release.nix) for
the complete list)
At the moment, Hydra runs only on GNU/Linux (*i686-linux* and
*x86\_64-linux*).
For small projects, Hydra can be run on any reasonably modern machine.
For individual projects you can even run Hydra on a laptop. However, the
charm of a buildfarm server is usually that it operates without
disturbing the developer's working environment and can serve releases
over the internet. Typically you will also have your source
code administered in a version management system, such as Subversion.
Therefore, you will probably want to install a server that is connected
to the internet. To scale up to large and/or many projects, you will
need at least a considerable amount of disk space to store builds. Since
Hydra can schedule multiple simultaneous build jobs, it can be useful to
have a multi-core machine, and/or attach multiple build machines in a
network to the central Hydra server.
Of course we think it is a good idea to use the
[NixOS](http://nixos.org/nixos) GNU/Linux distribution for your
buildfarm server. But this is not a requirement. The Nix software
deployment system can be installed on any GNU/Linux distribution in
parallel to the regular package management system. Thus, you can use
Hydra on a Debian, Fedora, SuSE, or Ubuntu system.
Getting Nix
-----------
If your server runs NixOS you are all set to continue with installation
of Hydra. Otherwise you first need to install Nix. The latest stable
version can be found on [the Nix web
site](http://nixos.org/nix/download.html), along with a manual, which
includes installation instructions.
Installation
------------
The latest development snapshot of Hydra can be installed by visiting
the URL
[`http://hydra.nixos.org/view/hydra/unstable`](http://hydra.nixos.org/view/hydra/unstable)
and using the one-click install available at one of the build pages. You
can also install Hydra through the channel by performing the following
commands:
nix-channel --add http://hydra.nixos.org/jobset/hydra/master/channel/latest
nix-channel --update
nix-env -i hydra
Command completion should reveal a number of command-line tools from
Hydra, such as `hydra-queue-runner`.
Creating the database
---------------------
Hydra stores its results in a PostgreSQL database.
To set up a PostgreSQL database with *hydra* as database name and user
name, issue the following commands on the PostgreSQL server:
createuser -S -D -R -P hydra
createdb -O hydra hydra
Note that *\$prefix* is the location of Hydra in the nix store.
Hydra uses an environment variable to know which database should be
used, and a variable which points to a location that holds some state. To
set these variables for a PostgreSQL database, add the following to the
file `~/.profile` of the user running the Hydra services.
export HYDRA_DBI="dbi:Pg:dbname=hydra;host=dbserver.example.org;user=hydra;"
export HYDRA_DATA=/var/lib/hydra
You can provide the username and password in the file `~/.pgpass`, e.g.
dbserver.example.org:*:hydra:hydra:password
Make sure that the *HYDRA\_DATA* directory exists and is writable by
the user that will run the Hydra services.
Having set these environment variables, you can now initialise the
database by doing:
hydra-init
To create projects, you need to create a user with *admin* privileges.
This can be done using the command `hydra-create-user`:
$ hydra-create-user alice --full-name 'Alice Q. User' \
--email-address 'alice@example.org' --password foobar --role admin
Additional users can be created through the web interface.
Upgrading
---------
If you're upgrading Hydra from a previous version, you should do the
following to perform any necessary database schema migrations:
hydra-init
Getting Started
---------------
To start the Hydra web server, execute:
hydra-server
When the server is started, you can browse to <http://localhost:3000/>
to start configuring your Hydra instance.
The `hydra-server` command launches the web server. There are two other
processes that come into play:
- The *evaluator* is responsible for periodically evaluating job sets,
checking out their dependencies off their version control systems
(VCS), and queueing new builds if the result of the evaluation
changed. It is launched by the `hydra-evaluator` command.
- The *queue runner* launches builds (using Nix) as they are queued by
the evaluator, scheduling them onto the configured Nix hosts. It is
launched using the `hydra-queue-runner` command.
All three processes must be running for Hydra to be fully functional,
though it's possible to temporarily stop any one of them, for instance
for maintenance purposes.
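On NixOS, the Hydra module can run all three services for you as systemd
units. A minimal sketch, assuming the `services.hydra` module from Nixpkgs
(option names may vary between NixOS releases; the addresses are placeholders):

``` {.nix}
{
  services.hydra = {
    enable = true;                             # starts hydra-server, hydra-evaluator and hydra-queue-runner
    hydraURL = "http://localhost:3000";        # externally visible URL of this instance
    notificationSender = "hydra@example.org";  # sender address for notification e-mails
  };
}
```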
Serving behind reverse proxy
----------------------------
To serve the Hydra web server behind a reverse proxy like *nginx* or *httpd*,
some additional configuration must be made.
Edit your `hydra.conf` file in a similar way to this example:
using_frontend_proxy 1
base_uri example.com
`base_uri` should be your Hydra server's proxied URL. If you are using
the Hydra NixOS module, then setting the `hydraURL` option should be enough.
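For example, a sketch of the corresponding NixOS option (the URL is a
placeholder for your proxied address):

``` {.nix}
services.hydra.hydraURL = "https://example.com/hydra";
```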
If you want to serve Hydra with a prefix path, for example
<http://example.com/hydra>, then you need to configure your reverse
proxy to pass `X-Request-Base` to Hydra, with the prefix path as value. For
example, if you are using nginx, use a configuration similar to the
following:
server {
  listen 443 ssl;
  server_name example.com;
  .. other configuration ..
  location /hydra/ {
    proxy_pass http://127.0.0.1:3000;
    proxy_redirect http://127.0.0.1:3000 https://example.com/hydra;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Request-Base /hydra;
  }
}
Using LDAP as authentication backend (optional)
-----------------------------------------------
Instead of using Hydra's built-in user management you can optionally
use LDAP to manage roles and users.
The `hydra-server` accepts the environment variable
*HYDRA\_LDAP\_CONFIG*. The value of the variable should point to a valid
YAML file containing the Catalyst LDAP configuration. The format of the
configuration file is described in the
[*Catalyst::Authentication::Store::LDAP*
documentation](https://metacpan.org/pod/Catalyst::Authentication::Store::LDAP#CONFIGURATION-OPTIONS).
An example is given below.
Roles can be assigned to users based on their LDAP group membership
(*use\_roles: 1* in the below example). For a user to have the role
*admin* assigned to them they should be in the group *hydra\_admin*. In
general any LDAP group of the form *hydra\_some\_role* (notice the
*hydra\_* prefix) will work.
credential:
  class: Password
  password_field: password
  password_type: self_check
store:
  class: LDAP
  ldap_server: localhost
  ldap_server_options.timeout: 30
  binddn: "cn=root,dc=example"
  bindpw: notapassword
  start_tls: 0
  start_tls_options:
    verify: none
  user_basedn: "ou=users,dc=example"
  user_filter: "(&(objectClass=inetOrgPerson)(cn=%s))"
  user_scope: one
  user_field: cn
  user_search_options:
    deref: always
  use_roles: 1
  role_basedn: "ou=groups,dc=example"
  role_filter: "(&(objectClass=groupOfNames)(member=%s))"
  role_scope: one
  role_field: cn
  role_value: dn
  role_search_options:
    deref: always


@@ -0,0 +1,173 @@
Introduction
============
About Hydra
-----------
Hydra is a tool for continuous integration testing and software release
that uses a purely functional language to describe build jobs and their
dependencies. Continuous integration is a simple technique to improve
the quality of the software development process. An automated system
continuously or periodically checks out the source code of a project,
builds it, runs tests, and produces reports for the developers. Thus,
various errors that might accidentally be committed into the code base
are automatically caught. Such a system allows more in-depth testing
than what developers could feasibly do manually:
- *Portability testing*: The software may need to be built and tested on many different
platforms. It is infeasible for each developer to do this before
every commit.
- Likewise, many projects have very large test sets (e.g., regression
tests in a compiler, or stress tests in a DBMS) that can take hours
or days to run to completion.
- Many kinds of static and dynamic analyses can be performed as part
of the tests, such as code coverage runs and static analyses.
- It may also be necessary to build many different *variants* of the
software. For instance, it may be necessary to verify that
the component builds with various versions of a compiler.
- Developers typically use incremental building to test their changes
(since a full build may take too long), but this is unreliable with
many build management tools (such as Make), i.e., the result of the
incremental build might differ from a full build.
- It ensures that the software can be built from the sources under
revision control. Users of version management systems such as CVS
and Subversion often forget to place source files under revision
control.
- The machines on which the continuous integration system runs ideally
provide a clean, well-defined build environment. If this
environment is administered through proper SCM techniques, then
builds produced by the system can be reproduced. In contrast,
developer work environments are typically not under any kind of SCM
control.
- In large projects, developers often work on a particular component
of the project, and do not build and test the composition of those
components (again since this is likely to take too long). To prevent
the phenomenon of "big bang integration", where components are
only tested together near the end of the development process, it is
important to test components together as soon as possible (hence
*continuous integration*).
- It allows software to be *released* by automatically creating
packages that users can download and
install. To do this manually represents an often prohibitive amount
of work, as one may want to produce releases for many different
platforms: e.g., installers for Windows and Mac OS X, RPM or Debian
packages for certain Linux distributions, and so on.
In its simplest form, a continuous integration tool sits in a loop
building and releasing software components from a version management
system. For each component, it performs the following tasks:
- It obtains the latest version of the component's source code from
the version management system.
- It runs the component's build process (which presumably includes
the execution of the component's test set).
- It presents the results of the build (such as error logs and
releases) to the developers, e.g., by producing a web page.
Examples of continuous integration tools include Jenkins, CruiseControl,
Tinderbox, Sisyphus, Anthill and BuildBot. These tools have various
limitations.
- They do not manage the *build environment*. The build environment
consists of the dependencies necessary to
perform a build action, e.g., compilers, libraries, etc. Setting up
the environment is typically done manually, and without proper SCM
control (so it may be hard to reproduce a build at a later time).
Manual management of the environment scales poorly in the number of
configurations that must be supported. For instance, suppose that we
want to build a component that requires a certain compiler X. We
then have to go to each machine and install X. If we later need a
newer version of X, the process must be repeated all over again. An
even worse problem occurs if there are conflicting, mutually
exclusive versions of the dependencies. Thus, simply installing the
latest version is not an option. Of course, we can install these
components in different directories and manually pass the
appropriate paths to the build processes of the various components.
But this is a rather tiresome and error-prone process.
- They do not easily support *variability in software systems*. A
system may have a great deal of build-time variability: optional
functionality, whether to build a debug or production version,
different versions of dependencies, and so on. (For instance, the
Linux kernel now has over 2,600 build-time configuration switches.)
It is therefore important that a continuous integration tool can
easily select and test different instances from the configuration
space of the system to reveal problems, such as erroneous
interactions between features. In a continuous integration setting,
it is also useful to test different combinations of versions of
subsystems, e.g., the head revision of a component against stable
releases of its dependencies, and vice versa, as this can reveal
various integration problems.
*Hydra* is a continuous integration tool that solves these problems. It
is built on top of the [Nix package manager](http://nixos.org/nix/),
which has a purely functional language for describing package build
actions and their dependencies. This allows the build environment for
projects to be produced automatically and deterministically, and
variability in components to be expressed naturally using functions; and
as such is an ideal fit for a continuous build system.
About Us
--------
Hydra is the successor of the Nix Buildfarm, which was developed in
tandem with the Nix software deployment system. Nix was originally
developed at the Department of Information and Computing Sciences,
Utrecht University by the TraCE project (2003-2008). The project was
funded by the Software Engineering Research Program Jacquard to improve
the support for variability in software systems. Funding for the
development of Nix and Hydra is now provided by the NIRICT LaQuSo Build
Farm project.
About this Manual
-----------------
This manual tells you how to install the Hydra buildfarm software on
your own server and how to operate that server using its web interface.
License
-------
Hydra is free software: you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free
Software Foundation, either version 3 of the License, or (at your
option) any later version.
Hydra is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE. See the [GNU General Public
License](http://www.gnu.org/licenses/) for more details.
Hydra at `nixos.org`
--------------------
The `nixos.org` installation of Hydra runs at
[`http://hydra.nixos.org/`](http://hydra.nixos.org/). That installation
is used to build software components from the [Nix](http://nixos.org),
[NixOS](http://nixos.org/nixos), [GNU](http://www.gnu.org/),
[Stratego/XT](http://strategoxt.org), and related projects.
If you are one of the developers on those projects, it is likely that
you will be using the NixOS Hydra server in some way. If you need to
administer automatic builds for your project, you should pull the right
strings to get an account on the server. This manual will tell you how
to set up new projects and build jobs within those projects and write a
release.nix file to describe the build process of your project to Hydra.
You can skip the next chapter.
If your project does not yet have automatic builds within the NixOS
Hydra server, it may actually be eligible. We are in the process of
setting up a large buildfarm that should be able to support open source
and academic software projects. Get in touch.
Hydra on your own buildfarm
---------------------------
If you need to run your own Hydra installation, the
[installation chapter](installation.md) explains how to download and install the
system on your own server.

doc/manual/src/projects.md Normal file

@@ -0,0 +1,463 @@
Creating and Managing Projects
==============================
Once Hydra is installed and running, the next step is to add projects to
the build farm. We follow the example of the [Patchelf
project](http://nixos.org/patchelf.html), a software tool written in C
and using the GNU Build System (GNU Autoconf and GNU Automake).
Log in to the web interface of your Hydra installation using the user
name and password you inserted in the database (by default, Hydra's web
server listens on [`localhost:3000`](http://localhost:3000/)). Then
follow the "Create Project" link to create a new project.
Project Information
-------------------
A project definition consists of some general information and a set of
job sets. The general information identifies a project, its owner, and
current state of activity. Here's what we fill in for the patchelf
project:
Identifier: patchelf
The *identifier* is the identity of the project. It is used in URLs and
in the names of build results.
The identifier should be a unique name (it is the primary database key
for the project table in the database). If you try to create a project
with an already existing identifier you'd get an error message from the
database. So try to create the project after entering just the general
information to figure out if you have chosen a unique name. Job sets can
be added once the project has been created.
Display name: Patchelf
The *display name* is used in menus.
Description: A tool for modifying ELF binaries
The *description* is used as short documentation of the nature of the
project.
Owner: eelco
The *owner* of a project can create and edit job sets.
Enabled: Yes
Only if the project is *enabled* are builds performed.
Once created, there should be an entry for the project in the sidebar. Go
to the project page for the
[Patchelf](http://localhost:3000/project/patchelf) project.
Job Sets
--------
A project can consist of multiple *job sets* (hereafter *jobsets*),
separate tasks that can be built separately, but may depend on each
other (without cyclic dependencies, of course). Go to the
[Edit](http://localhost:3000/project/patchelf/edit) page of the Patchelf
project and \"Add a new jobset\" by providing the following
\"Information\":
Identifier: trunk
Description: Trunk
Nix expression: release.nix in input patchelfSrc
This states that in order to build the `trunk` jobset, the Nix
expression in the file `release.nix`, which can be obtained from input
`patchelfSrc`, should be evaluated. (We'll have a look at `release.nix`
later.)
To realize a job we probably need a number of inputs, which can be
declared in the table below. As many inputs as required can be added.
For patchelf we declare the following inputs.
patchelfSrc       'Git checkout'   https://github.com/NixOS/patchelf
nixpkgs           'Git checkout'   https://github.com/NixOS/nixpkgs
officialRelease   Boolean          false
system            String value     "i686-linux"
Building Jobs
-------------
Build Recipes
-------------
Build jobs and *build recipes* for a jobset are specified in a text file
written in the [Nix language](http://nixos.org/nix/). The recipe is
actually called a *Nix expression* in Nix parlance. By convention this
file is often called `release.nix`.
The `release.nix` file is typically kept under version control, and the
repository that contains it is one of the build inputs of the
corresponding jobset--often called `hydraConfig` by convention. The repository
for that file and the actual file name are specified on the web
interface of Hydra under the `Setup` tab of the jobset's overview page,
under the `Nix
expression` heading. See, for example, the [jobset overview
page](http://hydra.nixos.org/jobset/patchelf/trunk) of the PatchELF
project, and [the corresponding Nix
file](https://github.com/NixOS/patchelf/blob/master/release.nix).
Knowledge of the Nix language is recommended, but the example below
should already give a good idea of how it works:
let
  pkgs = import <nixpkgs> {}; ①

  jobs = rec { ②

    tarball = ③
      pkgs.releaseTools.sourceTarball { ④
        name = "hello-tarball";
        src = <hello>; ⑤
        buildInputs = (with pkgs; [ gettext texLive texinfo ]);
      };

    build = ⑥
      { system ? builtins.currentSystem }: ⑦

      let pkgs = import <nixpkgs> { inherit system; }; in
      pkgs.releaseTools.nixBuild { ⑧
        name = "hello";
        src = jobs.tarball;
        configureFlags = [ "--disable-silent-rules" ];
      };
  };
in
jobs ⑨
This file shows what a `release.nix` file for
[GNU Hello](http://www.gnu.org/software/hello/) would look like.
GNU Hello is representative of many GNU and non-GNU free software
projects:
- it uses the GNU Build System, namely GNU Autoconf and GNU Automake;
for users, it means it can be installed using the usual
`./configure && make install` procedure;
- it uses Gettext for internationalization;
- it has a Texinfo manual, which can be rendered as PDF with TeX.
The file defines a jobset consisting of two jobs: `tarball`, and
`build`. It contains the following elements (referenced from the figure
by numbers):
1. This defines a variable `pkgs` holding the set of packages provided
by [Nixpkgs](http://nixos.org/nixpkgs/).
Since `nixpkgs` appears in angle brackets, there must be a build
input of that name in the Nix search path. In this case, the web
interface should show a `nixpkgs` build input, which is a checkout
of the Nixpkgs source code repository; Hydra then adds this and
other build inputs to the Nix search path when evaluating
`release.nix`.
2. This defines a variable holding the two Hydra jobs--an *attribute
set* in Nix.
3. This is the definition of the first job, named `tarball`. The
purpose of this job is to produce a usable source code tarball.
4. The `tarball` job calls the `sourceTarball` function, which
(roughly) runs `autoreconf && ./configure &&
make dist` on the checkout. The `buildInputs` attribute
specifies additional software dependencies for the job.
> The package names used in `buildInputs`--e.g., `texLive`--are the
> names of the *attributes* corresponding to these packages in
> Nixpkgs, specifically in the
> [`all-packages.nix`](https://github.com/NixOS/nixpkgs/blob/master/pkgs/top-level/all-packages.nix)
> file. See the section entitled "Package Naming" in the Nixpkgs
> manual for more information.
5. The `tarball` job expects a `hello` build input to be available in
the Nix search path. Again, this input is passed by Hydra and is
meant to be a checkout of GNU Hello's source code repository.
6. This is the definition of the `build` job, whose purpose is to build
Hello from the tarball produced above.
7. The `build` function takes one parameter, `system`, which should be
a string defining the Nix system type--e.g., `"x86_64-linux"`.
Additionally, it refers to `jobs.tarball`, seen above.
Hydra inspects the formal argument list of the function (here, the
`system` argument) and passes it the corresponding parameter
specified as a build input on Hydra\'s web interface. Here, `system`
is passed by Hydra when it calls `build`. Thus, it must be defined
as a build input of type string in Hydra, which could take one of
several values.
The question mark after `system` defines the default value for this
argument, and is only useful when debugging locally.
8. The `build` job calls the `nixBuild` function, which unpacks the
tarball, then runs `./configure && make
&& make check && make install`.
9. Finally, the set of jobs is returned to Hydra, as a Nix attribute
set.
Building from the Command Line
------------------------------
It is often useful to test a build recipe, for instance before it is
actually used by Hydra, when testing changes, or when debugging a build
issue. Since build recipes for Hydra jobsets are just plain Nix
expressions, they can be evaluated using the standard Nix tools.
To evaluate the `tarball` job of the above example, just
run:
$ nix-build release.nix -A tarball
However, doing this with the example as is will probably
yield an error like this:
error: user-thrown exception: file `hello' was not found in the Nix search path (add it using $NIX_PATH or -I)
The error is self-explanatory. Assuming `$HOME/src/hello` points to a
checkout of Hello, this can be fixed this way:
$ nix-build -I ~/src release.nix -A tarball
Similarly, the `build` job can be evaluated:
$ nix-build -I ~/src release.nix -A build
The `build` job reuses the result of the `tarball` job, rebuilding it
only if it needs to.
Adding More Jobs
----------------
The example illustrates how to write the most basic
jobs, `tarball` and `build`. In practice, much more can be done by using
features readily provided by Nixpkgs or by creating new jobs as
customizations of existing jobs.
For instance, a test coverage report for projects compiled with GCC can be
generated automatically using the `coverageAnalysis` function provided
by Nixpkgs instead of `nixBuild`. Back to our GNU Hello example, we can
define a `coverage` job that produces an HTML code coverage report
directly readable from the corresponding Hydra build page:
coverage =
  { system ? builtins.currentSystem }:

  let pkgs = import nixpkgs { inherit system; }; in

  pkgs.releaseTools.coverageAnalysis {
    name = "hello";
    src = jobs.tarball;
    configureFlags = [ "--disable-silent-rules" ];
  };
As can be seen, the only difference compared to `build` is the use of
`coverageAnalysis`.
Nixpkgs provides many more build tools, including the ability to run
builds in virtual machines, which can themselves run another GNU/Linux
distribution, which allows for the creation of packages for these
distributions. Please see [the `pkgs/build-support/release`
directory](https://github.com/NixOS/nixpkgs/tree/master/pkgs/build-support/release)
of Nixpkgs for more. The NixOS manual also contains information about
whole-system testing in virtual machines.
Now, assume we want to build Hello with an old version of GCC, and with
different `configure` flags. A new `build_exotic` job can be written
that simply *overrides* the relevant arguments passed to `nixBuild`:
build_exotic =
  { system ? builtins.currentSystem }:

  let
    pkgs = import nixpkgs { inherit system; };
    build = jobs.build { inherit system; };
  in
  pkgs.lib.overrideDerivation build (attrs: {
    buildInputs = [ pkgs.gcc33 ];
    preConfigure = "gcc --version";
    configureFlags =
      attrs.configureFlags ++ [ "--disable-nls" ];
  });
The `build_exotic` job reuses `build` and overrides some of its
arguments: it adds a dependency on GCC 3.3, a pre-configure phase that
runs `gcc --version`, and adds the `--disable-nls` configure flag.
This customization mechanism is very powerful. For instance, it can be
used to change the way Hello and *all* its dependencies--including the C
library and compiler used to build it--are built. See the Nixpkgs manual
for more.
Declarative projects
--------------------
Hydra supports declaratively configuring a project's jobsets. This
configuration can be done statically, or generated by a build job.
> **Note**
>
> Hydra will treat the project's declarative input as a static definition
> if and only if the spec file contains a dictionary of dictionaries. If
> the value of any key in the spec is not a dictionary, it will treat the
> spec as a generated declarative spec.
### Static, Declarative Projects
Hydra supports declarative projects, where jobsets are configured from a
static JSON document in a repository.
To configure a static declarative project, take the following steps:
1. Create a Hydra-fetchable source like a Git repository or local path.
2. In that source, create a file called `spec.json`, and add the
specification for all of the jobsets. Each key is a jobset name and each
value is that jobset's specification. For example:
``` {.json}
{
"nixpkgs": {
"enabled": 1,
"hidden": false,
"description": "Nixpkgs",
"nixexprinput": "nixpkgs",
"nixexprpath": "pkgs/top-level/release.nix",
"checkinterval": 300,
"schedulingshares": 100,
"enableemail": false,
"emailoverride": "",
"keepnr": 3,
"inputs": {
"nixpkgs": {
"type": "git",
"value": "git://github.com/NixOS/nixpkgs.git master",
"emailresponsible": false
}
}
},
"nixos": {
"enabled": 1,
"hidden": false,
"description": "NixOS: Small Evaluation",
"nixexprinput": "nixpkgs",
"nixexprpath": "nixos/release-small.nix",
"checkinterval": 300,
"schedulingshares": 100,
"enableemail": false,
"emailoverride": "",
"keepnr": 3,
"inputs": {
"nixpkgs": {
"type": "git",
"value": "git://github.com/NixOS/nixpkgs.git master",
"emailresponsible": false
}
}
}
}
```
3. Create a new project, and set the project\'s declarative input type,
declarative input value, and declarative spec file to point to the
source and JSON file you created in step 2.
Hydra will create a special jobset named `.jobsets`. When the `.jobsets`
jobset is evaluated, this static specification will be used for
configuring the rest of the project's jobsets.
### Generated, Declarative Projects
Hydra also supports generated declarative projects, where jobsets are
configured automatically from specification files instead of being
managed through the UI. A jobset specification is a JSON object
containing the configuration of the jobset, for example:
``` {.json}
{
"enabled": 1,
"hidden": false,
"description": "js",
"nixexprinput": "src",
"nixexprpath": "release.nix",
"checkinterval": 300,
"schedulingshares": 100,
"enableemail": false,
"emailoverride": "",
"keepnr": 3,
"inputs": {
"src": { "type": "git", "value": "git://github.com/shlevy/declarative-hydra-example.git", "emailresponsible": false },
"nixpkgs": { "type": "git", "value": "git://github.com/NixOS/nixpkgs.git release-16.03", "emailresponsible": false }
}
}
```
To configure a declarative project, take the following steps:
1. Create a jobset repository in the normal way (e.g. a git repo with a
`release.nix` file, any other needed helper files, and taking any
kind of hydra input), but without adding it to the UI. The nix
expression of this repository should contain a single job, named
`jobsets`. The output of the `jobsets` job should be a JSON file
containing an object of jobset specifications (a sketch is given
below). Each member of the
object will become a jobset of the project, configured by the
corresponding jobset specification.
2. In some hydra-fetchable source (potentially, but not necessarily,
the same repo you created in step 1), create a JSON file containing
a jobset specification that points to the jobset repository you
created in the first step, specifying any needed inputs
(e.g. nixpkgs) as necessary.
3. In the project creation/edit page, set declarative input type,
declarative input value, and declarative spec file to point to the
source and JSON file you created in step 2.
Hydra will create a special jobset named `.jobsets`, which whenever
evaluated will go through the steps above in reverse order:
1. Hydra will fetch the input specified by the declarative input type
and value.
2. Hydra will use the configuration given in the declarative spec file
as the jobset configuration for this evaluation. In addition to any
inputs specified in the spec file, hydra will also pass the
`declInput` argument corresponding to the input fetched in step 1.
3. As normal, hydra will build the jobs specified in the jobset
repository, which in this case is the single `jobsets` job. When
that job completes, hydra will read the created jobset
specifications and create corresponding jobsets in the project,
disabling any jobsets that used to exist but are not present in the
current spec.
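To illustrate step 1, the `jobsets` job of such a repository could look
roughly like the sketch below. It is hypothetical (not taken from this
commit): the repository URL and the `main` jobset are placeholders, it
assumes the `src` and `nixpkgs` inputs from the specification above, accepts
the `declInput` argument that Hydra passes, and produces a JSON file of
jobset specifications as its output:

``` {.nix}
{ nixpkgs, declInput }:   # declInput describes the fetched declarative input; unused in this sketch

let
  pkgs = import nixpkgs {};

  # One attribute per jobset, with the same fields as in a static spec.json.
  spec = {
    main = {
      enabled = 1;
      hidden = false;
      description = "main branch";
      nixexprinput = "src";
      nixexprpath = "release.nix";
      checkinterval = 300;
      schedulingshares = 100;
      enableemail = false;
      emailoverride = "";
      keepnr = 3;
      inputs = {
        src = { type = "git"; value = "git://github.com/example/project.git main"; emailresponsible = false; };
        nixpkgs = { type = "git"; value = "git://github.com/NixOS/nixpkgs.git release-16.03"; emailresponsible = false; };
      };
    };
  };
in
{
  # The single job built for the ".jobsets" jobset; its output is read back
  # by Hydra and turned into the project's jobsets.
  jobsets = pkgs.writeText "spec.json" (builtins.toJSON spec);
}
```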
Email Notifications
-------------------
Hydra can send email notifications when the status of a build changes.
This provides immediate feedback to maintainers or committers when a
change causes build failures.
The simplest approach to enable email notifications is to use the ssmtp
package, which simply hands off the emails to another SMTP server. For
details on how to configure ssmtp, see the documentation for the
`networking.defaultMailServer` option. To use ssmtp for the Hydra email
notifications, add it to the path option of the Hydra services in your
`/etc/nixos/configuration.nix` file:
systemd.services.hydra-queue-runner.path = [ pkgs.ssmtp ];
systemd.services.hydra-server.path = [ pkgs.ssmtp ];
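For completeness, a sketch of the ssmtp relay configuration referenced above.
The `networking.defaultMailServer` option names follow the older NixOS
interface mentioned in this section and may differ on newer releases; the
host names are placeholders:

``` {.nix}
{ pkgs, ... }:
{
  # Hand outgoing mail to an existing SMTP relay via ssmtp.
  networking.defaultMailServer = {
    directDelivery = true;
    hostName = "smtp.example.org";
    domain = "example.org";
  };

  # Make the sendmail wrapper visible to the Hydra daemons (same as above).
  systemd.services.hydra-queue-runner.path = [ pkgs.ssmtp ];
  systemd.services.hydra-server.path = [ pkgs.ssmtp ];
}
```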


@@ -162,7 +162,7 @@
buildInputs =
[ makeWrapper autoconf automake libtool unzip nukeReferences pkgconfig libpqxx
gitAndTools.topGit mercurial darcs subversion breezy openssl bzip2 libxslt
- final.nix perlDeps perl
+ final.nix perlDeps perl mdbook
boost
postgresql_11
(if lib.versionAtLeast lib.version "20.03pre"
@@ -179,8 +179,6 @@
gzip bzip2 lzma gnutar unzip git gitAndTools.topGit mercurial darcs gnused breezy
] ++ lib.optionals stdenv.isLinux [ rpm dpkg cdrkit ] );
configureFlags = [ "--with-docbook-xsl=${docbook_xsl}/xml/xsl/docbook" ];
shellHook = ''
PATH=$(pwd)/src/hydra-evaluator:$(pwd)/src/script:$(pwd)/src/hydra-eval-jobs:$(pwd)/src/hydra-queue-runner:$PATH
PERL5LIB=$(pwd)/src/lib:$PERL5LIB