We really don't want to cache them for a year, which is the default.
Yes, computing them may be expensive, but that is not worth a multi-gigabyte
Redis database that takes minutes to load into RAM on service (re)start.
It has been a recurring issue that flake lockfile bumps in this repository
cause the Forgejo patches to no longer apply.
The dedicated repository (nix-forgejo) solves this by not overriding the
existing forgejo derivation from nixpkgs but rather providing its own.
Additionally, nix-forgejo pins and uses a "known good" nixpkgs revision
itself, unless `pkgs` is passed on import.
So if issues should arise after a flake bump, we can use that revision
by modifying our import statement, or we can roll back the nix-forgejo
revision itself.
Moving Forgejo out of tree also makes iterating on it a lot easier and
opens up a lot of other possibilities :)
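For illustration, a minimal sketch of the two import modes; the
`inputs.nix-forgejo` input and the `forgejo` attribute are assumptions, not
the actual API:

  { inputs, pkgs, ... }:
  let
    # Normal case: build Forgejo against this repo's nixpkgs.
    nix-forgejo = import inputs.nix-forgejo { inherit pkgs; };
    # Escape hatch after a breaking flake bump: drop `pkgs` so nix-forgejo
    # falls back to its own pinned "known good" nixpkgs revision:
    #   nix-forgejo = import inputs.nix-forgejo { };
  in {
    services.forgejo.package = nix-forgejo.forgejo;
  }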
Username and vhost creation are out of band and manual.
Here's a simple way to reproduce that setup on the RabbitMQ server:
$ cd /var/lib/rabbitmq
$ sudo -u rabbitmq rabbitmqctl add_user ofborg "$pwd"
$ sudo -u rabbitmq rabbitmqctl set_permissions ofborg '.*' '.*' '.*'
Doing better will require the Vault server, which will come soon anyway.
Signed-off-by: Raito Bezarius <masterancpp@gmail.com>
Status & checks RPC & event queue.
Statuses & checks are set by the rest of OfBorg; the web service needs to
be exposed.
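A minimal sketch of what exposing it could look like with the stock nginx
module; the host name and port below are assumptions:

  services.nginx.virtualHosts."status.example.org" = {
    enableACME = true;
    forceSSL = true;
    # Reverse proxy to the status & checks web service; the port is an assumption.
    locations."/".proxyPass = "http://127.0.0.1:8000";
  };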
Signed-off-by: Raito Bezarius <masterancpp@gmail.com>
This pipes events from Gerrit into the AMQP broker and enables the whole
system to react to VCS changes.
We need a filter to transform raw Gerrit events into OfBorg-specific
events that we then propagate through the system.
Signed-off-by: Raito Bezarius <masterancpp@gmail.com>
Via OpenBao, the Linux Foundation fork of HashiCorp Vault.
The module supports high availability, but we only have one node for now.
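For illustration, a single-node sketch assuming a Vault-style configuration;
the module and option names here are assumptions, not the actual interface:

  services.openbao = {
    enable = true;
    settings = {
      # Single node for now; HA would add more Raft peers here.
      storage.raft.path = "/var/lib/openbao";
      listener.tcp = {
        address = "127.0.0.1:8200";
        tls_disable = true;
      };
    };
  };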
Signed-off-by: Raito Bezarius <masterancpp@gmail.com>
Introduce a data-only module to abstract over the deployment; we use it
for the WAN for now.
The use case is service discovery for simple cases.
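Roughly, a data-only module is just options with no config; the names below
are hypothetical, not the actual ones:

  { lib, ... }:
  {
    # Only options, no config: the module just carries data for others to read.
    options.infra.wan.endpoints = lib.mkOption {
      type = lib.types.attrsOf lib.types.str;
      default = { };
      description = "Map from service name to WAN address, for simple service discovery.";
    };
  }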
Signed-off-by: Raito Bezarius <masterancpp@gmail.com>
Now we won't pile up a bunch of failed streaming attempts, and this will
automatically push to Git.
Credentials still need to be set up for the push to actually work.
Signed-off-by: Raito Bezarius <masterancpp@gmail.com>
We are running into too many out-of-disk-space situations with OWS on
the main disk.
This way, we can reuse the Gerrit disk for all that data, which
hopefully overlaps substantially with Gerrit's own data.
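For illustration only, one way this could look is a bind mount onto the
Gerrit disk; both paths below are hypothetical:

  fileSystems."/var/lib/ows" = {
    # Subdirectory on the Gerrit data disk; the path is an assumption.
    device = "/gerrit/ows";
    options = [ "bind" ];
  };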
Signed-off-by: Raito Bezarius <masterancpp@gmail.com>
This helps us get rid of useless crawler traffic.
It is enabled for gerrit01, which is suffering the most from this.
Signed-off-by: Raito Bezarius <masterancpp@gmail.com>
At Lix, we have a few aarch64-linux and aarch64-darwin systems that we use
to boost our CI.
This is a module to handle tenant-specific extra build capacity without
it leaking into the rest of the deployment.
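As a sketch, the module would emit something along the lines of the standard
`nix.buildMachines` entries below; host name, user, and key path are
assumptions:

  nix.distributedBuilds = true;
  nix.buildMachines = [
    {
      hostName = "builder-01.lix.example";   # hypothetical Lix-provided builder
      systems = [ "aarch64-linux" ];
      sshUser = "nix-builder";
      sshKey = "/etc/nix/lix-builder-key";
      maxJobs = 8;
      supportedFeatures = [ "big-parallel" ];
    }
  ];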
Signed-off-by: Raito Bezarius <masterancpp@gmail.com>
Lix may have its own secrets, and we want to keep a certain level of
generality in the NixOS modules, so we decouple which secret is selected
dynamically through a simple tenancy hierarchy system.
Unfortunately, this requires rewriting all call sites with a floral
prefix until we migrate them to the simple internal secret module, which
is aware of this.
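A sketch of the idea; the helper, the agenix-style backend, and the call
site below are all hypothetical:

  { config, ... }:
  let
    # Pick a secret path by tenant ("floral" or "lix").
    secretFor = tenant: name: config.age.secrets."${tenant}-${name}".path;
  in {
    # Call sites currently spell out the tenant prefix explicitly, until the
    # internal secret module can resolve it from the tenancy hierarchy.
    systemd.services.some-service.serviceConfig.LoadCredential =
      [ "token:${secretFor "floral" "some-service-token"}" ];
  }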
Signed-off-by: Raito Bezarius <masterancpp@gmail.com>