NAME

apt-cacher - caching proxy for Debian packages

SYNOPSIS

http://server:PORT/[mirror]/[distribution]


DESCRIPTION

Apt-cacher is a caching proxy for Debian packages, allowing a number of computers to share a single local cache. Packages requested from the cache only need to be downloaded from the Debian mirrors once, no matter how many local machines need to install them. This saves internet bandwidth, improves performance for local users and reduces the load on the mirrors.

Setting up apt-cacher involves two stages: installing apt-cacher itself on a single machine on your network, and configuring all local machines to use the cache.

The cache machine needs apt-cacher installed, which runs as a daemon or as a CGI under a web server (eg. Apache). When Apt requests a package from the cache machine, the request is handled by apt-cacher which checks whether it already has that particular package. If so, the package is returned immediately to Apt for installation. If not, or if the package in the local cache has been superseded by a more recent version, the package is fetched from the specified mirror. While being fetched it is simultaneously streamed to Apt, and also saved to the local cache for future use.

Other machines on your network do not need apt-cacher installed in order to use the local cache. The only modification to each client computer is to prepend the cache machine's address and script identifier to each HTTP mirror in /etc/apt/sources.list, depending on the installation method (see below). Apt then sends all its package requests to the cache machine, rather than directly to the mirror.

For example, if you have a line in your sources.list that looks like this:

deb http://ftp.debian.org/debian unstable main contrib non-free

You would add 'http://yourcache:3142/' at the start of the mirror address to make it look like this:

deb http://yourcache:3142/ftp.debian.org/debian unstable main contrib non-free

where 'yourcache:3142' is the address and daemon port of the computer running your apt-cacher cache.

For installations using a web server (CGI) the prefix should be 'http://yourcache/apt-cacher/' instead.
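The prefixing can also be scripted. The sketch below assumes a cache reachable as 'yourcache' on the default port 3142 and uses an example mirror; both are placeholders to adapt. It is demonstrated on a sample file rather than /etc/apt/sources.list itself:

```shell
# Rewrite every http:// mirror in a sources.list so requests go through
# the cache. "yourcache:3142" is a placeholder for your cache's address.
echo 'deb http://ftp.debian.org/debian unstable main contrib non-free' > sources.list.sample
sed -i.bak 's|http://|http://yourcache:3142/|g' sources.list.sample
cat sources.list.sample
# deb http://yourcache:3142/ftp.debian.org/debian unstable main contrib non-free
```

To apply the same rewrite to the real file, run the sed command against /etc/apt/sources.list as root; the .bak suffix keeps a backup of the original.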

Apt-cacher currently only handles forwarding to HTTP sources. Support for other access methods (FTP, rsync) is not planned because the maintainer believes the outcome would not be worth the effort.


INSTALLATION

Apt-cacher can be installed in various ways. The recommended method is to run the program as a stand-alone daemon; this gives the best performance and the lowest overall memory usage.

To run apt-cacher in stand-alone mode, edit the file /etc/default/apt-cacher to enable the daemon, then run the init script to start it. Local clients can access the daemon using the 'http://server:PORT/' prefix as described above. The default port is 3142 but can be changed in the configuration file (see below).
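A minimal sketch of these steps, assuming a Debian-style init script and that /etc/default/apt-cacher uses an AUTOSTART variable (the variable name is an assumption; check your installed file):

```shell
# Enable the daemon in /etc/default/apt-cacher (variable name assumed),
# then start it via the init script.
sed -i 's/^AUTOSTART=0/AUTOSTART=1/' /etc/default/apt-cacher
/etc/init.d/apt-cacher start

# Smoke test from a client: fetch a Release file through the cache.
# "yourcache" and the mirror are placeholders.
wget -O /dev/null http://yourcache:3142/ftp.debian.org/debian/dists/unstable/Release
```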

An alternative is to run apt-cacher from inetd. This is a good method if the daemon should not stay loaded all the time. Access control checks can be performed by inetd or the tcpd wrapper. See README.Debian for details.
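A typical inetd entry for this mode might look like the line below; the daemon path and flag are assumptions, so check README.Debian for the exact invocation on your system:

```
3142  stream  tcp  nowait  www-data  /usr/sbin/tcpd  /usr/share/apt-cacher/apt-cacher -i
```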

The last method is CGI mode. It is not recommended for long-term use because it has a visible performance impact on both network and server. By default, the apt-cacher package adds a configuration profile for Apache. The URL prefix for sources.list is 'http://yourcache/apt-cacher/' for a server in this mode.

In both daemon modes you can use apt-cacher to simulate a web server. When bound to port 80 it will even emulate the behaviour of the previous CGI-based versions. However, you cannot run another web server on that port in this case, and only root can start apt-cacher directly because 80 is a privileged port.


CONFIGURATION

Apt-cacher uses a configuration file for its important options. There are also a few command line options controlling startup behaviour; run apt-cacher with the "-h" argument to list them.

The configuration file is installed by default as /etc/apt-cacher/apt-cacher.conf. It is read each time the daemon starts or a CGI/inetd child is executed. The apt-cacher daemon therefore needs to be restarted using the init script in order to reread its configuration.

The options available in the config file (and their default settings) are:

cache_dir [/var/cache/apt-cacher]
The directory where apt-cacher will store local copies of all packages requested. This can grow to many hundreds of MB, so make sure it is on a partition with plenty of room. NOTE: the cache directory needs to contain certain subdirectories for correct storage management. If you create a custom directory, please use the script in /usr/share/apt-cacher/ or use the initially created cache directory as an example.
admin_email [root@localhost]
The email address of the administrator, displayed on the info page and in traffic reports.
allowed_hosts [*]
If your apt-cacher machine is directly exposed to the Internet and you are worried about unauthorised machines fetching packages through it, you can specify a range of IP addresses that are allowed to use it. Localhost (127.0.0.1) is always allowed; other addresses must be matched by allowed_hosts and not by denied_hosts to be permitted to use the cache. Note that by default apt-cacher allows requests from any client, so set a range here if you want to restrict access. See the default configuration file apt-cacher.conf for further details and examples.
denied_hosts []
The opposite of the allowed_hosts setting: excludes hosts from the list of allowed hosts.
allowed_hosts_6
Like allowed_hosts, but for IPv6 clients.
denied_hosts_6
Like denied_hosts, but for IPv6 clients.
allowed_locations
Access control list based on the first part of the requested URL path (the server names). A comma separated list.
path_map
A mapping scheme to rewrite request URLs to one or more backend servers. See the default apt-cacher.conf file for examples.
generate_reports [1]
Whether to generate traffic reports daily. The report can be viewed by pointing a browser at the "/report" path of your cache's URL, e.g. http://yourcache:3142/report (using the address of your own cache, of course).
clean_cache [1]
Whether to flush old packages from your cache daily. Packages are deleted on the basis of whether they have been superseded by newer packages, not on age, so it should be safe to leave this on.
logdir [/var/log/apt-cacher]
Directory to use for the access and error log files and traffic report. The access log records all successful package requests using a timestamp, whether the request was fulfilled from cache, the IP address of the requesting computer, the size of the package transferred, and the name of the package. The error log records major faults, and is also used for debug messages if the debug directive is set to 1.
expire_hours [0]
How many hours Packages and Release files are cached before they are assumed to be too old and must be re-fetched. Setting 0 means that the validity of these files is checked on each access by comparing the timestamps in the server's HTTP headers with those stored locally.
http_proxy []
Apt-cacher can pass all its requests to an external HTTP proxy like Squid, which can be very useful if your ISP blocks port 80 and requires all web traffic to go through its proxy. The format is 'hostname:port', e.g. 'proxy.example.com:8080'.
use_proxy [0]
Use of an external proxy can be turned on or off with this flag. Value should be either 0 (off) or 1 (on).
http_proxy_auth [proxyuser:proxypass]
External HTTP proxies sometimes require authentication for full access. The format is 'username:password', e.g. 'proxyuser:proxypass'.
use_proxy_auth [0]
Use of external proxy authentication can be turned on or off with this flag. Value should be either 0 (off) or 1 (on).
limit [0]
Rate limiting sets the maximum bandwidth in bytes per second to use for fetching packages. The syntax follows wget's rate-limit option: append 'k' or 'm' for kilobytes or megabytes per second, e.g. 'limit=25k'. Use 0 or a negative value for no rate limiting.
debug [0]
Whether debug mode is enabled. Off by default. When turned on, lots of extra debug junk will be spewed to the error log. This can make the error log become quite big, so only use it when trying to debug problems.
daemon_port [3142]
The TCP port for the daemon in stand-alone mode.
daemon_addr [unset]
The daemon can be limited to listen to only this local IP address. Use with care.
user [unset]
The effective user id to change to after allocating the ports. Useful for binding privileged ports like 80.
group [unset]
The effective group id to change to.
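Pulling several of the options above together, a minimal /etc/apt-cacher/apt-cacher.conf might look like the sketch below. The address range is an illustrative value, not a recommendation, and the shipped default file is the authority on accepted value syntax:

```
cache_dir=/var/cache/apt-cacher
logdir=/var/log/apt-cacher
admin_email=root@localhost
daemon_port=3142
allowed_hosts=192.168.0.0/24
generate_reports=1
clean_cache=1
use_proxy=0
limit=0
debug=0
```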


FAQ

Q: Can I just copy some .debs into the cache dir and have it work (precaching)?

A: Yes! However, a bit of additional work is required to make them usable and persistent in the cache.

First: alongside the .debs, apt-cacher stores additional information: a flag file to verify that the package was completely downloaded, and a file with the HTTP headers sent by the server. If you copy .debs straight into the storage directory without adding those, fetching them *will* fail.

Fortunately, apt-cacher now comes with an import helper script to make things easier. Just put a bunch of .debs into /var/cache/apt-cacher/import (or a directory called 'import' inside whatever you have set your cache dir to be) and run the import script in /usr/share/apt-cacher/ (an alternative source directory can be given as the first parameter). The script runs through all the package files it finds there, moves them to the correct locations and creates the additional flag/header files. Run it with "-h" for more information about additional features - it can discover files recursively and can save space by creating links to files located elsewhere in the filesystem.
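The precaching steps can be sketched as follows; the import script's filename below is an assumption (look in /usr/share/apt-cacher/ for the actual helper on your system), and the source of the .debs is just an example:

```shell
# Stage locally available .debs into the cache's import directory,
# then run the import helper (script name below is an assumption).
mkdir -p /var/cache/apt-cacher/import
cp /var/cache/apt/archives/*.deb /var/cache/apt-cacher/import/
/usr/share/apt-cacher/apt-cacher-import.pl /var/cache/apt-cacher/import
```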

Second: if the daily cleanup operation is enabled (see the clean_cache option above) and no cached Packages.gz (or .bz2) file refers to the new files, the package files will be removed at the next cleanup run. Put another way: if there are clients that would download these packages and those clients have run "apt-get update" through apt-cacher at least once, there is no reason to worry.

Q: Does the daily generation of reports or cleaning the cache depend on whether apt-cacher is running continuously as a daemon?

A: No, the regular maintenance jobs are independent of a running server. They are executed by cron and use only static data: logs, cached index files and the package directory listing. However, apt-cacher must still be configured correctly, because the cleanup job runs it directly (in inetd mode) to refresh the Packages/Sources files.

Q: Are host names permissible in allowed_hosts and denied_hosts? What happens if a host appears in both lists (a literal reading of the description above suggests the host is denied)?

A: Unlike some other software such as Apache, there is no configurable checking order; a client host is simply checked against both filters, allowed_hosts and denied_hosts. The following combinations are possible:

allowed_hosts=* and denied_hosts empty: every host is allowed.

allowed_hosts=<ip data> and denied_hosts empty: only the listed hosts are permitted.

allowed_hosts=* and denied_hosts=<ip data>: every host is accepted except those matched by denied_hosts.

allowed_hosts=<ip data> and denied_hosts=<ip data>: only the clients in allowed_hosts are accepted, except those also matched by denied_hosts.

allowed_hosts=<empty> blocks everything. If allowed_hosts is omitted, * is assumed. denied_hosts must not be set to "*"; use an empty allowed_hosts setting if you want that.
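Written out as configuration fragments, the four combinations look like this (the address ranges are examples only; see the default apt-cacher.conf for the accepted range syntax):

```
# every host allowed
allowed_hosts=*
denied_hosts=

# only one subnet allowed
allowed_hosts=192.168.0.0/24
denied_hosts=

# everyone except one subnet
allowed_hosts=*
denied_hosts=10.0.0.0/8

# one subnet, minus a single host
allowed_hosts=192.168.0.0/24
denied_hosts=192.168.0.13
```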

Q: generate_reports: how does being able to view the reports depend on the web server you are running? Are they only available if apt-cacher is running on port 80?

A: The report is generated by a script (started by a cron job, see above) and stored as $logdir/report.html. You can access it using the "/report" path in the access URL. If apt-cacher is running on port 80 (or in CGI mode), the URL for the browser looks like http://apt-cacher-server/report/.


FILES

/etc/apt-cacher (main configuration directory), /etc/apt-cacher/apt-cacher.conf (main configuration file), /var/log/apt-cacher (log directory, rotated by logrotate if available), /var/log/apt-cacher/report.html (report page, generated by the helper script).


AUTHOR

Apt-cacher was originally written by Nick Andrews. This manual page was originally written by Jonathan Oxer for the Debian GNU/Linux system (but may be used by others), and is now maintained by Eduard Bloch.