Before following these steps, please coordinate with portmgr.
This section is only of interest when considering tier-2 architectures.
Here are the requirements for a node to be generally useful.
CPU capacity: anything less than 500MHz is generally not useful for package building.
We are able to adjust the number of jobs dispatched to each machine, and we generally tune the number to use 100% of CPU.
RAM: Less than 2G is not very useful; 8G or more is preferred. We have been tuning to one job per 512M of RAM.
disk: at least 20G is needed for the filesystem; 32G is needed for swap. Best performance will be obtained if multiple disks are used and configured as geom stripes. Performance numbers are also TBA.
Package building will test disk drives to destruction. Be aware of what you are signing up for!
network bandwidth: TBA. However, an 8-job machine has been shown to saturate a cable modem line.
Pick a unique hostname. It does not have to be a publicly resolvable hostname (it can be a name on your internal network).
By default, package building requires the following TCP ports to be accessible: 22 (ssh), 414 (infoseek), and 8649 (ganglia). If these are not accessible, pick others and ensure that an ssh tunnel is set up (see below).
(Note: if you have more than one machine at your site, you will need an individual TCP port for each service on each machine, and thus ssh tunnels will be necessary. As such, you will probably need to configure port forwarding on your firewall.)
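As a sketch of such a tunnel (the hostname, alternate port numbers, and ssh port are illustrative assumptions; the actual invocation belongs in portbuild's crontab on the server, as described later in this section):

```
# Illustrative only: forward alternate local ports on the server to
# one client's infoseek (414) and ganglia (8649) ports over ssh.
ssh -fNL 10414:localhost:414 -L 18649:localhost:8649 \
    -p 2222 portbuild@client.example.org
```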
Decide if you will be booting natively or via pxeboot. You will find that it is easier to keep up with changes to -current with the latter, especially if you have multiple machines at your site.
Pick a directory to hold ports configuration and chroot subdirectories. It may be best to put this on its own partition. (Example: /usr2/.)

The filename chroot is a historical remnant. The chroot command is no longer used.
Decide if you will be using a local squid cache on the client, instead of the server. It is more efficient to run it on the server. If you are doing that, skip the "squid" steps below.
Create a directory to contain the latest -current source tree and check it out. (Since your machine will likely be asked to build packages for -current, the kernel it runs should be reasonably up-to-date with the bindist that will be exported by our scripts.)
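For example (the repository URL and destination directory are assumptions; use whatever source-update mechanism your site already has):

```
# Illustrative only: fetch a -current source tree.
mkdir -p /usr/src
svn checkout svn://svn.freebsd.org/base/head /usr/src
```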
If you are using pxeboot: create a directory to contain the install bits. You will probably want to use a subdirectory of /pxeroot, e.g., /pxeroot/${arch}-${branch}. Export that as DESTDIR.
If you are cross-building, export TARGET_ARCH=${arch}.

The procedure for cross-building ports is not yet defined.
Generate a kernel config file. Include GENERIC (or, if on i386™ and you are using more than 3.5G of RAM, PAE).
Required options:
Suggested options:
If you are interested in debugging general problems, you may wish to use the following. However, for unattended operations, it is best to leave it out:
For PAE, it is not currently possible to load modules. Therefore, if you are running an architecture that supports Linux emulation, you will need to add:

Also for PAE, as of 20110912 you need the following. This needs to be investigated:
As root, do the usual build steps, e.g.:

# make -j4 buildworld
# make buildkernel KERNCONF=${kernconf}
# make installkernel KERNCONF=${kernconf}
# make installworld
The install steps use DESTDIR.
Customize files in etc/. Whether you do this on the client itself, or another machine, will depend on whether you are using pxeboot.
If you are using pxeboot: create a subdirectory of ${DESTDIR} called conf/. Create one subdirectory default/etc/, and (if your site will host multiple nodes) subdirectories ${ip-address}/etc/ to contain override files for individual hosts. (You may find it handy to symlink each of those directories to a hostname.) Copy the entire contents of ${DESTDIR}/etc/ to ${DESTDIR}/conf/default/etc/; that is where you will edit your files. The by-ip-address etc/ directories will probably only need customized rc.conf files.
In either case, apply the following steps:
Create a portbuild user and group. It can have the '*' password.
Create /home/portbuild/.ssh/ and populate authorized_keys.
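A sketch of this and the preceding user-creation step. The pw invocation in the comments is an assumption about a typical FreeBSD setup; STAGE stands in for the client's filesystem root so the layout can be demonstrated without privileges, and the key material is a placeholder.

```shell
# Sketch: set up the portbuild user's ssh directory layout.
# On a real client you would first create the account as root, e.g.:
#   pw groupadd portbuild
#   pw useradd portbuild -g portbuild -m -w no
# ('-w no' disables the password, matching the '*' password above;
# exact flags may vary by site).
STAGE=$(mktemp -d)     # stands in for the client's filesystem root

mkdir -p "${STAGE}/home/portbuild/.ssh"
chmod 700 "${STAGE}/home/portbuild/.ssh"

# Populate authorized_keys with the cluster's public key
# (placeholder key material here).
echo 'ssh-rsa AAAA... portbuild@pointyhat' \
    > "${STAGE}/home/portbuild/.ssh/authorized_keys"
chmod 600 "${STAGE}/home/portbuild/.ssh/authorized_keys"
```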
If you are using ganglia for monitoring, add the following user:
Add it to etc/group as well.
If you are using a local squid cache on the client, add the following user:
Add it to etc/group as well.
Create the appropriate files in etc/.ssh/.
In etc/crontab, add:
Create the appropriate etc/fstab. (If you have multiple, different, machines, you will need to put those in the override directories.)
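For illustration, an fstab along these lines would satisfy the disk requirements given earlier (at least 20G of filesystem, 32G of swap, multiple disks configured as geom stripes); the device names are placeholders, not cluster policy:

```
# Illustrative only: device names are placeholders.
/dev/ada0s1a     /       ufs   rw          1  1
/dev/ada0s1b     none    swap  sw          0  0
/dev/stripe/st0  /usr2   ufs   rw,noatime  2  2
```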
In etc/inetd.conf, add:
You should run the cluster on UTC. If you have not set the clock to UTC:

# cp -p /usr/share/zoneinfo/Etc/UTC etc/localtime

Create the appropriate etc/rc.conf. (If you are using pxeboot, and have multiple, different, machines, you will need to put those in the override directories.)
Recommended entries for physical nodes:

hostname="${hostname}"
inetd_enable="YES"
linux_enable="YES"
nfs_client_enable="YES"
ntpd_enable="YES"
sendmail_enable="NONE"
sshd_enable="YES"
sshd_program="/usr/local/sbin/sshd"

If you are using ganglia for monitoring, add the following:
If you are using a local squid cache on the client, add the following:

squid_chdir="/a/squid/logs"
squid_pidfile="/a/squid/logs/squid.pid"

Required entries for VMWare-based nodes:
Recommended entries for VMWare-based nodes:

squid_chdir="/a/squid/logs"
squid_pidfile="/a/squid/logs/squid.pid"

ntpd(8) should not be enabled for VMWare instances. Also, it may be possible to leave squid disabled by default so as to not have /a persistent (which should save instantiation time). Work is still ongoing.
Create etc/resolv.conf, if necessary.
Modify etc/sysctl.conf:

kern.corefile=/a/%N.core
kern.sugid_coredump=1
#debug.witness_ddb=0
#debug.witness_watch=0

# squid needs a lot of fds (leak?)
kern.maxfiles=40000
kern.maxfilesperproc=30000

# Since the NFS root is static we do not need to check frequently for file changes
# This saves >75% of NFS traffic
vfs.nfs.access_cache_timeout=300
debug.debugger_on_panic=1

# For jailing
security.jail.sysvipc_allowed=1
security.jail.allow_raw_sockets=1
security.jail.chflags_allowed=1
security.jail.enforce_statfs=1

vfs.lookup_shared=1

If desired, modify etc/syslog.conf to change the logging destinations to @pointyhat.freebsd.org.
Install the following ports:
You may also wish to install:
If you are using ganglia for monitoring, install the following:
If you are using a local squid cache on the client, install the following:
Customize files in usr/local/etc/. Whether you do this on the client itself, or another machine, will depend on whether you are using pxeboot.
The trick of using conf override subdirectories is less effective here, because you would need to copy over all subdirectories of usr/. This is an implementation detail of how the pxeboot works.
Apply the following steps:
If you are using ganglia, modify usr/local/etc/gmond.conf:

name = "${arch} package build cluster"
owner = "portmgr@FreeBSD.org"
url = "http://pointyhat.freebsd.org"

If there are machines from more than one cluster in the same multicast domain (basically the same LAN), then change the multicast groups to different values (.71, .72, etc.).
Create usr/local/etc/rc.d/portbuild.sh, using the appropriate value for scratchdir:

scratchdir=/a
ln -sf ${scratchdir}/portbuild /var/

# Identify builds ready for use
cd /var/portbuild/arch
for i in */builds/*; do
    if [ -f ${i}/.ready ]; then
        mkdir /tmp/.setup-${i##*/}
    fi
done

# Flag that we are ready to accept jobs
touch /tmp/.boot_finished

If you are using a local squid cache, modify usr/local/etc/squid/squid.conf:
Also, change usr/local to usr2 in the values of cache_dir, access_log, cache_log, cache_store_log, pid_filename, netdb_filename, and coredump_dir. Finally, change the cache_dir storage scheme from ufs to aufs (which offers better performance).
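These substitutions can be applied mechanically. A sketch using sed against a sample fragment (the sample directive values are illustrative, not the stock squid.conf):

```shell
# Sketch: rewrite the /usr/local prefix to /usr2 for the squid.conf
# directives named above, and switch cache_dir from ufs to aufs.
cat > squid.conf.sample <<'EOF'
cache_dir ufs /usr/local/squid/cache 100 16 256
access_log /usr/local/squid/logs/access.log
pid_filename /usr/local/squid/logs/squid.pid
EOF

sed -E \
    -e '/^(cache_dir|access_log|cache_log|cache_store_log|pid_filename|netdb_filename|coredump_dir)/s|/usr/local|/usr2|' \
    -e '/^cache_dir/s/ ufs / aufs /' \
    squid.conf.sample > squid.conf.new

cat squid.conf.new
```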
Configure ssh: copy etc/ssh to usr/local/etc/ssh and add NoneEnabled yes to sshd_config.
This step is under review.
Create usr/local/etc/sudoers/sudoers.d/portbuild:
Change into the port/package directory you picked above, e.g., cd /usr2.
As root:

# mkdir portbuild
# chown portbuild:portbuild portbuild
# mkdir pkgbuild
# chown portbuild:portbuild pkgbuild

If you are using a local squid cache:

# mkdir squid
# mkdir squid/cache
# mkdir squid/logs
# chown -R squid:squid squid
If clients preserve /var/portbuild between boots then they must either preserve their /tmp, or revalidate their available builds at boot time (see the script on the amd64 machines). They must also clean up stale jails from previous builds before creating /tmp/.boot_finished.
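Such a boot-time revalidation might look like the following sketch. It is not the actual amd64 script; the directory layout and flag-file names follow the portbuild.sh example earlier in this section, and the mktemp directories stand in for /var/portbuild/${arch} and /tmp so it can be tried safely:

```shell
# Sketch: revalidate available builds at boot when /var/portbuild
# persists between boots but /tmp does not.
PBROOT=$(mktemp -d)    # stands in for /var/portbuild/${arch}
TMPROOT=$(mktemp -d)   # stands in for /tmp

# Simulate two builds, only one of which is ready.
mkdir -p "${PBROOT}/8/builds/20110912" "${PBROOT}/9/builds/20110913"
touch "${PBROOT}/8/builds/20110912/.ready"

# Clean up stale setup flags left over from the previous boot...
rm -rf "${TMPROOT}"/.setup-*

# ...then recreate one flag per build that is actually ready.
cd "${PBROOT}"
for i in */builds/*; do
    if [ -f "${i}/.ready" ]; then
        mkdir "${TMPROOT}/.setup-${i##*/}"
    fi
done

# Finally, announce readiness to accept jobs.
touch "${TMPROOT}/.boot_finished"
ls -A "${TMPROOT}"
```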
Boot the client.
If you are using a local squid cache, as root, initialize the squid directories:

# squid -z
These steps need to be taken by a portmgr acting as portbuild on the server.
If any of the default TCP ports is not available (see above), you will need to create an ssh tunnel for them and include its invocation command in portbuild's crontab.
Unless you can use the defaults, add an entry to /home/portbuild/.ssh/config to specify the public IP address, TCP port for ssh, username, and any other necessary information.
Create /a/portbuild/${arch}/clients/bindist-${hostname}.tar. Copy one of the existing ones as a template and unpack it in a temporary directory.

Customize etc/resolv.conf for the local site.
Customize etc/make.conf for FTP fetches for the local site. Note: the nulling-out of MASTER_SITE_BACKUP must be common to all nodes, but the first entry in MASTER_SITE_OVERRIDE should be the nearest local FTP mirror. Example:

.if defined(FETCH_ORIGINAL)
MASTER_SITE_BACKUP=
.else
MASTER_SITE_OVERRIDE= \
	ftp://friendly-local-ftp-mirror/pub/FreeBSD/ports/distfiles/${DIST_SUBDIR}/ \
	ftp://${BACKUP_FTP_SITE}/pub/FreeBSD/distfiles/${DIST_SUBDIR}/
.endif

tar it up and move it to the right location.
Hint: you will need one of these for each machine; however, if you have multiple machines at one site, you should create a site-specific one (e.g., in /a/portbuild/conf/clients/) and symlink to it.
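A sketch of that site-specific arrangement (the site name, hostnames, and resolv.conf contents are placeholders; the mktemp directories stand in for the real /a/portbuild paths):

```shell
# Sketch: one site-wide bindist tarball shared by per-host symlinks.
CLIENTS=$(mktemp -d)   # stands in for /a/portbuild/${arch}/clients
CONF=$(mktemp -d)      # stands in for /a/portbuild/conf/clients

# Build the site-specific tarball from a customized temporary tree.
work=$(mktemp -d)
mkdir -p "${work}/etc"
echo 'nameserver 10.0.0.1' > "${work}/etc/resolv.conf"
tar -C "${work}" -cf "${CONF}/bindist-examplesite.tar" etc

# Point each machine's per-host name at the shared tarball.
for host in node1 node2; do
    ln -sf "${CONF}/bindist-examplesite.tar" \
        "${CLIENTS}/bindist-${host}.tar"
done

tar -tf "${CLIENTS}/bindist-node1.tar"
```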
Create /a/portbuild/${arch}/portbuild-${hostname} using one of the existing ones as a guide. This file contains overrides to /a/portbuild/${arch}/portbuild.conf.
Suggested values:

scratchdir=/usr2/pkgbuild
client_user=portbuild
sudo_cmd="sudo -H"
rsync_gzip=-z
infoseek_host=localhost
infoseek_port=${tunnelled-tcp-port}

If you will be using squid on the client:

squid_dir=/usr2/squid

If, instead, you will be using squid on the server:

http_proxy="http://servername:3128/"

Possible other values:
These steps need to be taken by a portmgr acting as root on pointyhat.
Add the public IP address to /etc/hosts.allow. (Remember, multiple machines can be on the same IP address.)
If you are using ganglia, add an appropriate data_source entry to /usr/local/etc/gmetad.conf:

data_source "arch/location Package Build Cluster" 30 hostname

You will need to restart gmetad.
These steps need to be taken by a portmgr acting as portbuild:

Ensure that ssh to the client is working by executing ssh hostname uname -a. The actual command is not important; what is important is to confirm the setup, and also add an entry into known_hosts, once you have confirmed the node's identity.
Populate the client's copy of /var/portbuild/scripts/ by something like /a/portbuild/scripts/dosetupnode arch major latest hostname. Verify that you now have files in that directory.
Test the other TCP ports by executing telnet hostname portnumber. 414 (or its tunnel) should give you a few lines of status information including arch and osversion; 8649 should give you an XML response from ganglia.
This step needs to be taken by a portmgr acting as portbuild:
Tell qmanager about the node. Example:

python path/qmanager/qclient add name=uniquename arch=arch \
    osversion=osversion numcpus=number haszfs=0 online=1 \
    domain=domain primarypool=package pools="package all" \
    maxjobs=1 acl="ports-arch,deny_all"
Finally, again as portmgr acting as portbuild:

Once you are sure that the client is working, tell pollmachine about it by adding it to /a/portbuild/${arch}/mlist.
This, and other documents, can be downloaded from http://ftp.FreeBSD.org/pub/FreeBSD/doc/
For questions about FreeBSD, read the documentation before contacting <questions@FreeBSD.org>.
For questions about this documentation, e-mail <doc@FreeBSD.org>.