author    flu0r1ne <flu0r1ne@flu0r1ne.net>  2023-09-12 05:19:00 -0500
committer flu0r1ne <flu0r1ne@flu0r1ne.net>  2023-09-12 05:19:00 -0500
commit    065d367504ae3e9b7141f7a7daa6fe1db7aeb4e7 (patch)
tree      ebdc28708331a5b105e0bc2a4e9cb5185b1a26c1
Add README and script
-rw-r--r--  LICENSE     21
-rw-r--r--  README.md   79
-rw-r--r--  rbuild.py  192
3 files changed, 292 insertions, 0 deletions
diff --git a/LICENSE b/LICENSE
new file mode 100644
index 0000000..8f2cbdf
--- /dev/null
+++ b/LICENSE
@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2023 Flu0r1ne
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..3d22a5b
--- /dev/null
+++ b/README.md
@@ -0,0 +1,79 @@
+# Docker Auto Rebuild Script (`rbuild`)
+
+> Note: This script is experimental. Contributions are welcome.
+
+This short Python script automatically rebuilds Docker Compose deployments to apply security updates
+and pull fresh base images, ideally from a stable release tag if the application is well containerized.
+Docker Compose is frequently used for small one-off deployments in CI/CD pipelines, small to medium
+businesses, and self-hosted services. In such scenarios, running a full-fledged container registry and
+an update monitor such as `Watchtower` can be excessive. This is a lightweight, zero-dependency script
+that can be scheduled with `cron` or a similar task scheduler. It triggers a rebuild if `BUILD_TTL`
+seconds have passed, if the configuration file changes, or if the `--force-rebuild` flag is set. It
+bypasses the Docker build cache during the rebuild, on the assumption that security updates are applied
+in the build process, and cleans up stale images and orphaned containers afterwards.
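+
+For example, a one-off manual run might look like the following (the compose file path is illustrative):
+
+```bash
+# Force an immediate rebuild of all services defined in the compose project
+./rbuild.py --force-rebuild /path/to/docker-compose.yml
+```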
+
+However, this script has *numerous limitations*. It is not suitable for environments requiring high availability, as
+it lacks support for rolling updates and rollbacks in the event of container failures. Additionally, it doesn't offer
+auto-scaling capabilities. For those requirements, consider using Kubernetes or a similar orchestration platform.
+
+> Initially, my ambition was to design a system that could automatically roll back if an image failed after an update.
+In fact, I planned to implement a rollback policy akin to Docker's own `restart-policy`. If I was going to take this on,
+I wanted to do it right. That meant the program would need to daemonize itself, listen for `docker events` to catch any
+container failures, and use a timer to trigger rebuilds. Things quickly became complicated as I considered the state management
+needed for each service—each potentially having fallback images, active images, and newly built images. I also considered
+questions like how the daemon would maintain its state across restarts, and what the consequences might be if a human operator
+were to accidentally remove images. These are all legitimate questions with viable solutions, but as I stared at the escalating
+complexity of the required state machine, I realized this wasn't something I could knock out in a single night. While it's still
+on my radar, it's taken a backseat since this simpler version fulfills my current needs.
+
+## How to Use
+
+To schedule the script to run periodically on an Ubuntu host, follow these steps:
+
+1. Make the script executable and move it onto the `PATH`:
+
+ ```bash
+ chmod +x rbuild.py
+ mv rbuild.py /usr/local/bin/
+ ```
+
+2. Open the crontab editor:
+
+ ```bash
+ crontab -e
+ ```
+
+3. Add the following line to run the script every 30 minutes against your compose file:
+
+ ```cron
+   */30 * * * * /usr/local/bin/rbuild.py /path/to/docker-compose.yml
+ ```
+
+### Environment Variables
+
+- `BUILD_TTL`: Time (in seconds) after which the script triggers a rebuild. Defaults to 86400 (1 day).
+- `UP_TIMEOUT_PERIOD`: Time (in seconds) that the script will wait while bringing up containers. Defaults to 60 seconds.
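+
+Both variables can also be supplied inline when running the script by hand; a quick sketch (values and path are illustrative):
+
+```bash
+# Rebuild if images are older than 12 hours; wait up to 2 minutes for containers to come up
+BUILD_TTL=43200 UP_TIMEOUT_PERIOD=120 /usr/local/bin/rbuild.py /path/to/docker-compose.yml
+```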
+
+## Usage
+
+```
+usage: rbuild.py [-h] [--build-period BUILD_PERIOD] [--up-timeout-period UP_TIMEOUT_PERIOD] [--force-rebuild] [--remove-images] filename
+
+Automatically rebuild a series of containers using Docker Compose.
+
+positional arguments:
+ filename The docker-compose file to use.
+
+options:
+ -h, --help Show this help message and exit.
+ --build-period BUILD_PERIOD
+ Rebuild period in seconds.
+ --up-timeout-period UP_TIMEOUT_PERIOD
+ Timeout period for bringing up containers in seconds.
+ --force-rebuild Force a rebuild of all containers.
+  --remove-images       Remove all rbuild-managed images.
+```
+
+## License
+
+MIT
diff --git a/rbuild.py b/rbuild.py
new file mode 100644
index 0000000..c39e33e
--- /dev/null
+++ b/rbuild.py
@@ -0,0 +1,192 @@
+#!/usr/bin/env python3
+
+import argparse
+import hashlib
+import json
+import subprocess
+import tempfile
+import os
+import sys
+from datetime import datetime
+from typing import (
+ List,
+ Dict,
+ Optional,
+ Set,
+ Tuple
+)
+
+# Constants for labels
+CONFIG_HASH_LABEL = 'rbuild.config_sha256'
+BUILD_TIME_LABEL = 'rbuild.build_time'
+COMPOSE_NAME_LABEL = 'rbuild.compose_name'
+
+def parse_env_var_to_int(key: str, default: Optional[int] = None) -> int:
+ """Parses an environment variable to an integer value."""
+ try:
+ timeout = os.getenv(key, default=default)
+ return int(timeout)
+ except (ValueError, TypeError):
+ raise ValueError(f'Failed to parse "{key}", should be a numeric time in seconds')
+
+def die(*args, exit_status: int = 1, **kwargs):
+    """Print a message to stderr and exit with a non-zero status."""
+    print(*args, **kwargs, file=sys.stderr)
+    sys.exit(exit_status)
+
+def run_command(command: List[str]) -> str:
+ """Runs a command and returns the stdout."""
+
+ command_str = ' '.join(command)
+ try:
+ result = subprocess.run(command, stdout=subprocess.PIPE, text=True, check=True)
+ except subprocess.CalledProcessError as e:
+ die(f'Failed to run command: "{command_str}", exited={e.returncode}')
+ except subprocess.TimeoutExpired:
+ die(f'Command timed out: "{command_str}"')
+ return result.stdout
+
+def is_image_expired(image: str, config_sha256: str, build_ttl: int) -> bool:
+    """Checks if a docker image is expired: missing rbuild labels, built from a
+    different config, or older than build_ttl seconds."""
+    inspect_output = run_command(['docker', 'inspect', image])
+    labels = json.loads(inspect_output)[0]['Config']['Labels']
+
+    # Images without rbuild labels (e.g. pre-existing or hand-built images) are treated as expired
+    if not labels or BUILD_TIME_LABEL not in labels:
+        return True
+
+    if labels.get(CONFIG_HASH_LABEL) != config_sha256:
+        return True
+
+    build_time = datetime.fromisoformat(labels[BUILD_TIME_LABEL])
+    delta = datetime.utcnow() - build_time
+
+    return delta.total_seconds() > build_ttl
+
+def remove_images(compose_name: str, operating_images: Set[str] = set()):
+    """Remove images created by rbuild for this compose project, keeping any listed in operating_images."""
+ stale_images = []
+
+ image_list_output = run_command(['docker', 'image', 'list', '--format', 'json'])
+ for image_output in image_list_output.split('\n'):
+
+ if not image_output:
+ continue
+
+ img = json.loads(image_output)
+
+ image_inspect_output = run_command(['docker', 'inspect', img['ID']])
+
+ image_details = json.loads(image_inspect_output)[0]
+
+ labels = image_details['Config']['Labels']
+
+        image_name = labels.get(COMPOSE_NAME_LABEL) if labels else None
+
+ if image_name != compose_name or \
+ any((tag in operating_images for tag in image_details['RepoTags'])):
+ continue
+
+ stale_images.append(img['ID'])
+
+ if stale_images:
+ run_command(['docker', 'image', 'rm', *stale_images])
+
+def read_config(filename: str) -> Tuple[str, Dict]:
+    """Render the compose file with `docker compose config` and return the raw JSON plus the parsed dict."""
+
+ config_output = run_command(['docker', 'compose', '-f', filename, 'config', '--format', 'json'])
+
+ config = json.loads(config_output)
+
+ return config_output, config
+
+def build_main(filename: str, force_rebuild=False) -> None:
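+    # 1. Render the compose config and hash it to detect changes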
+ config_output, config = read_config(filename)
+
+ config_sha256 = hashlib.sha256(config_output.encode()).hexdigest()
+
+ name = config.get('name')
+
+    # 2. Determine if any working images are expired
+    # `docker compose ps --format json` emits a JSON array on older Compose
+    # releases and newline-delimited JSON objects on newer ones; handle both.
+    ps_output = run_command(['docker', 'compose', '-f', filename, 'ps', '--all', '--format', 'json'])
+    try:
+        containers = json.loads(ps_output)
+        if isinstance(containers, dict):
+            containers = [containers]
+    except json.JSONDecodeError:
+        containers = [json.loads(line) for line in ps_output.split('\n') if line]
+
+    any_expired = False
+
+    for container in containers:
+        image = container.get('Image')
+        if is_image_expired(image, config_sha256, BUILD_TTL):
+            any_expired = True
+            break
+
+    if not (force_rebuild or any_expired or len(containers) == 0):
+        sys.exit(0)
+
+ # 3. Modify config
+ build_time = datetime.utcnow()
+ operating_images = set()
+ for service_name, service_data in config['services'].items():
+ labels = service_data.setdefault('build', {}).setdefault('labels', {})
+
+        labels[CONFIG_HASH_LABEL] = config_sha256
+        labels[BUILD_TIME_LABEL] = build_time.isoformat()
+        labels[COMPOSE_NAME_LABEL] = name
+
+ new_image = f'rbuild-{name}-{service_name}:{build_time.timestamp()}'
+ operating_images.add(new_image)
+ service_data['image'] = new_image
+
+ # 4. Save this config to a temporary JSON file
+ with tempfile.NamedTemporaryFile(mode='w+', suffix='.json') as temp_file:
+ json.dump(config, temp_file)
+
+ temp_file.flush()
+
+    # 5. Rebuild the images (bypassing the build cache) and bring the services up
+ subprocess.run(['docker', 'compose', '-f', temp_file.name, 'build', '--no-cache', '--pull'], check=True)
+ subprocess.run([
+ 'docker', 'compose', '-f', temp_file.name,
+ 'up', '--remove-orphans', '--detach', f'--wait-timeout={UP_TIMEOUT_PERIOD}'
+ ], check=True)
+
+ # 6. Get rid of stale images
+ remove_images(name, operating_images)
+
+def remove_main(filename: str) -> None:
+    """Remove all rbuild-managed images for the compose project, then exit."""
+
+ _, config = read_config(filename)
+ compose_name = config['name']
+ remove_images(compose_name)
+ sys.exit(0)
+
+if __name__ == '__main__':
+ BUILD_TTL = parse_env_var_to_int('BUILD_TTL', default=(24 * 60 * 60))
+ UP_TIMEOUT_PERIOD = parse_env_var_to_int('UP_TIMEOUT_PERIOD', default=60)
+
+ parser = argparse.ArgumentParser(description='Automatically rebuild a series of containers with docker compose.')
+
+ parser.add_argument('filename', type=str, help='The docker-compose file to use.')
+
+    parser.add_argument('--build-period', type=int, default=BUILD_TTL, help='Time images are allowed to live (in seconds).')
+    parser.add_argument('--up-timeout-period', type=int, default=UP_TIMEOUT_PERIOD, help='Up timeout period in seconds.')
+    parser.add_argument('--force-rebuild', default=False, action='store_true', help='Force all containers to be rebuilt.')
+
+    parser.add_argument('--remove-images', default=False, action='store_true', help='Remove all rbuild-managed images.')
+
+ try:
+ import argcomplete
+ argcomplete.autocomplete(parser)
+ except ImportError:
+ pass
+
+    args = parser.parse_args()
+
+    # Propagate command-line overrides to the globals used by build_main
+    BUILD_TTL = args.build_period
+    UP_TIMEOUT_PERIOD = args.up_timeout_period
+
+    # Check for mutual exclusivity
+    if args.remove_images and args.force_rebuild:
+        die("Error: --remove-images cannot be used with --force-rebuild")
+
+ if args.remove_images:
+ remove_main(args.filename)
+
+ build_main(args.filename, force_rebuild=args.force_rebuild)
+