Mirror of https://github.com/netbox-community/netbox.git (synced 2025-07-13 08:44:51 -06:00)

Comparing commits f99aebf437...5ee3afb9cd (4 commits):

* 5ee3afb9cd
* 875a641687
* 453ca57225
* b882fc9f67
contrib/netbox-housekeeping.service (deleted)
@@ -1,17 +0,0 @@
-[Unit]
-Description=NetBox Housekeeping Service
-Documentation=https://docs.netbox.dev/
-After=network-online.target
-Wants=network-online.target
-
-[Service]
-Type=simple
-
-User=netbox
-Group=netbox
-WorkingDirectory=/opt/netbox
-
-ExecStart=/opt/netbox/venv/bin/python /opt/netbox/netbox/manage.py housekeeping
-
-[Install]
-WantedBy=multi-user.target
contrib/netbox-housekeeping.sh (deleted)
@@ -1,9 +0,0 @@
-#!/bin/sh
-# This shell script invokes NetBox's housekeeping management command, which
-# is intended to be run nightly. This script can be copied into your system's
-# daily cron directory (e.g. /etc/cron.daily), or referenced directly from
-# within the cron configuration file.
-#
-# If NetBox has been installed into a nonstandard location, update the paths
-# below.
-/opt/netbox/venv/bin/python /opt/netbox/netbox/manage.py housekeeping
contrib/netbox-housekeeping.timer (deleted)
@@ -1,13 +0,0 @@
-[Unit]
-Description=NetBox Housekeeping Timer
-Documentation=https://docs.netbox.dev/
-After=network-online.target
-Wants=network-online.target
-
-[Timer]
-OnCalendar=daily
-AccuracySec=1h
-Persistent=true
-
-[Install]
-WantedBy=multi-user.target
docs/administration/housekeeping.md (deleted)
@@ -1,49 +0,0 @@
-# Housekeeping
-
-NetBox includes a `housekeeping` management command that should be run nightly. This command handles:
-
-* Clearing expired authentication sessions from the database
-* Deleting changelog records older than the configured [retention time](../configuration/miscellaneous.md#changelog_retention)
-* Deleting job result records older than the configured [retention time](../configuration/miscellaneous.md#job_retention)
-* Checking for new NetBox releases (if [`RELEASE_CHECK_URL`](../configuration/miscellaneous.md#release_check_url) is set)
-
-This command can be invoked directly, or by using the shell script provided at `/opt/netbox/contrib/netbox-housekeeping.sh`.
-
-## Scheduling
-
-### Using Cron
-
-This script can be linked from your cron scheduler's daily jobs directory (e.g. `/etc/cron.daily`) or referenced directly within the cron configuration file.
-
-```shell
-sudo ln -s /opt/netbox/contrib/netbox-housekeeping.sh /etc/cron.daily/netbox-housekeeping
-```
-
-!!! note
-    On Debian-based systems, be sure to omit the `.sh` file extension when linking to the script from within a cron directory. Otherwise, the task may not run.
-
-### Using Systemd
-
-First, create symbolic links for the systemd service and timer files. Link the existing service and timer files from the `/opt/netbox/contrib/` directory to the `/etc/systemd/system/` directory:
-
-```bash
-sudo ln -s /opt/netbox/contrib/netbox-housekeeping.service /etc/systemd/system/netbox-housekeeping.service
-sudo ln -s /opt/netbox/contrib/netbox-housekeeping.timer /etc/systemd/system/netbox-housekeeping.timer
-```
-
-Then, reload the systemd configuration and enable the timer to start automatically at boot:
-
-```bash
-sudo systemctl daemon-reload
-sudo systemctl enable --now netbox-housekeeping.timer
-```
-
-Check the status of your timer by running:
-
-```bash
-sudo systemctl list-timers --all
-```
-
-This command will show a list of all timers, including your `netbox-housekeeping.timer`. Make sure the timer is active and properly scheduled.
-
-That's it! Your NetBox housekeeping service is now configured to run daily using systemd.
@@ -264,18 +264,6 @@ cd /opt/netbox/netbox
 python3 manage.py createsuperuser
 ```
 
-## Schedule the Housekeeping Task
-
-NetBox includes a `housekeeping` management command that handles some recurring cleanup tasks, such as clearing out old sessions and expired change records. Although this command may be run manually, it is recommended to configure a scheduled job using the system's `cron` daemon or a similar utility.
-
-A shell script which invokes this command is included at `contrib/netbox-housekeeping.sh`. It can be copied to or linked from your system's daily cron task directory, or included within the crontab directly. (If installing NetBox into a nonstandard path, be sure to update the system paths within this script first.)
-
-```shell
-sudo ln -s /opt/netbox/contrib/netbox-housekeeping.sh /etc/cron.daily/netbox-housekeeping
-```
-
-See the [housekeeping documentation](../administration/housekeeping.md) for further details.
-
 ## Test the Application
 
 At this point, we should be able to run NetBox's development server for testing. We can check by starting a development instance locally.
@@ -183,13 +183,3 @@ Finally, restart the gunicorn and RQ services:
 ```no-highlight
 sudo systemctl restart netbox netbox-rq
 ```
-
-## 6. Verify Housekeeping Scheduling
-
-If upgrading from a release prior to NetBox v3.0, check that a cron task (or similar scheduled process) has been configured to run NetBox's nightly housekeeping command. A shell script which invokes this command is included at `contrib/netbox-housekeeping.sh`. It can be linked from your system's daily cron task directory, or included within the crontab directly. (If NetBox has been installed in a nonstandard path, be sure to update the system paths within this script first.)
-
-```shell
-sudo ln -s /opt/netbox/contrib/netbox-housekeeping.sh /etc/cron.daily/netbox-housekeeping
-```
-
-See the [housekeeping documentation](../administration/housekeeping.md) for further details.
@@ -434,7 +434,7 @@ A new management command has been added: `manage.py housekeeping`. This command
 * Delete change log records which have surpassed the configured retention period (if configured)
 * Check for new NetBox releases (if enabled)
 
-A convenience script for calling this command via an automated scheduler has been included at `/contrib/netbox-housekeeping.sh`. Please see the [housekeeping documentation](../administration/housekeeping.md) for further details.
+A convenience script for calling this command via an automated scheduler has been included at `/contrib/netbox-housekeeping.sh`. Please see the housekeeping documentation for further details.
 
 #### Custom Queue Support for Plugins ([#6651](https://github.com/netbox-community/netbox/issues/6651))
 
@@ -158,7 +158,6 @@ nav:
           - Okta: 'administration/authentication/okta.md'
       - Permissions: 'administration/permissions.md'
       - Error Reporting: 'administration/error-reporting.md'
-      - Housekeeping: 'administration/housekeeping.md'
       - Replicating NetBox: 'administration/replicating-netbox.md'
       - NetBox Shell: 'administration/netbox-shell.md'
   - Data Model:
@@ -1,8 +1,16 @@
 import logging
-import requests
 import sys
 from datetime import timedelta
+from importlib import import_module
+
+import requests
 from django.conf import settings
+from django.core.cache import cache
+from django.utils import timezone
+from packaging import version
+
+from core.models import Job, ObjectChange
+from netbox.config import Config
 from netbox.jobs import JobRunner, system_job
 from netbox.search.backends import search_backend
 from utilities.proxy import resolve_proxies
@@ -50,16 +58,23 @@ class SystemHousekeepingJob(JobRunner):
         if settings.DEBUG or 'test' in sys.argv:
             return
 
-        # TODO: Migrate other housekeeping functions from the `housekeeping` management command.
         self.send_census_report()
+        self.clear_expired_sessions()
+        self.prune_changelog()
+        self.delete_expired_jobs()
+        self.check_for_new_releases()
 
     @staticmethod
     def send_census_report():
         """
         Send a census report (if enabled).
         """
-        # Skip if census reporting is disabled
-        if settings.ISOLATED_DEPLOYMENT or not settings.CENSUS_REPORTING_ENABLED:
+        logging.info("Reporting census data...")
+        if settings.ISOLATED_DEPLOYMENT:
+            logging.info("ISOLATED_DEPLOYMENT is enabled; skipping")
             return
+        if not settings.CENSUS_REPORTING_ENABLED:
+            logging.info("CENSUS_REPORTING_ENABLED is disabled; skipping")
+            return
 
         census_data = {
@@ -76,3 +91,94 @@ class SystemHousekeepingJob(JobRunner):
             )
         except requests.exceptions.RequestException:
             pass
+
+    @staticmethod
+    def clear_expired_sessions():
+        """
+        Clear any expired sessions from the database.
+        """
+        logging.info("Clearing expired sessions...")
+        engine = import_module(settings.SESSION_ENGINE)
+        try:
+            engine.SessionStore.clear_expired()
+            logging.info("Sessions cleared.")
+        except NotImplementedError:
+            logging.warning(
+                f"The configured session engine ({settings.SESSION_ENGINE}) does not support "
+                f"clearing sessions; skipping."
+            )
+
+    @staticmethod
+    def prune_changelog():
+        """
+        Delete any ObjectChange records older than the configured changelog retention time (if any).
+        """
+        logging.info("Pruning old changelog entries...")
+        config = Config()
+        if not config.CHANGELOG_RETENTION:
+            logging.info("No retention period specified; skipping.")
+            return
+
+        cutoff = timezone.now() - timedelta(days=config.CHANGELOG_RETENTION)
+        logging.debug(f"Retention period: {config.CHANGELOG_RETENTION} days")
+        logging.debug(f"Cut-off time: {cutoff}")
+
+        count = ObjectChange.objects.filter(time__lt=cutoff).delete()[0]
+        logging.info(f"Deleted {count} expired records")
+
+    @staticmethod
+    def delete_expired_jobs():
+        """
+        Delete any jobs older than the configured retention period (if any).
+        """
+        logging.info("Deleting expired jobs...")
+        config = Config()
+        if not config.JOB_RETENTION:
+            logging.info("No retention period specified; skipping.")
+            return
+
+        cutoff = timezone.now() - timedelta(days=config.JOB_RETENTION)
+        logging.debug(f"Retention period: {config.JOB_RETENTION} days")
+        logging.debug(f"Cut-off time: {cutoff}")
+
+        count = Job.objects.filter(created__lt=cutoff).delete()[0]
+        logging.info(f"Deleted {count} expired records")
+
+    @staticmethod
+    def check_for_new_releases():
+        """
+        Check for new releases and cache the latest release.
+        """
+        logging.info("Checking for new releases...")
+        if settings.ISOLATED_DEPLOYMENT:
+            logging.info("ISOLATED_DEPLOYMENT is enabled; skipping")
+            return
+        if not settings.RELEASE_CHECK_URL:
+            logging.info("RELEASE_CHECK_URL is not set; skipping")
+            return
+
+        # Fetch the latest releases
+        logging.debug(f"Release check URL: {settings.RELEASE_CHECK_URL}")
+        try:
+            response = requests.get(
+                url=settings.RELEASE_CHECK_URL,
+                headers={'Accept': 'application/vnd.github.v3+json'},
+                proxies=resolve_proxies(url=settings.RELEASE_CHECK_URL)
+            )
+            response.raise_for_status()
+        except requests.exceptions.RequestException as exc:
+            logging.error(f"Error fetching release: {exc}")
+            return
+
+        # Determine the most recent stable release
+        releases = []
+        for release in response.json():
+            if 'tag_name' not in release or release.get('devrelease') or release.get('prerelease'):
+                continue
+            releases.append((version.parse(release['tag_name']), release.get('html_url')))
+        latest_release = max(releases)
+        logging.debug(f"Found {len(response.json())} releases; {len(releases)} usable")
+        logging.info(f"Latest release: {latest_release[0]}")
+
+        # Cache the most recent release
+        cache.set('latest_release', latest_release, None)
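For context: `check_for_new_releases()` caches a `(Version, html_url)` tuple under the `latest_release` key with no timeout. A minimal sketch of reading that value back, assuming a stand-in for NetBox's running version (the diff does not show how NetBox obtains it):

```python
# Sketch: reading back the cached release tuple written by
# check_for_new_releases(). CURRENT_VERSION is a stand-in value.
from django.core.cache import cache
from packaging import version

CURRENT_VERSION = version.parse('4.1.0')  # hypothetical running version

latest = cache.get('latest_release')  # (Version, html_url) or None
if latest is not None and latest[0] > CURRENT_VERSION:
    print(f'A newer release is available: {latest[0]} ({latest[1]})')
```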
@@ -116,7 +116,7 @@ class Job(models.Model):
         verbose_name_plural = _('jobs')
 
     def __str__(self):
-        return str(self.job_id)
+        return self.name
 
     def get_absolute_url(self):
         # TODO: Employ dynamic registration
@@ -14,9 +14,16 @@ from utilities.proxy import resolve_proxies
 
 
 class Command(BaseCommand):
-    help = "Perform nightly housekeeping tasks. (This command can be run at any time.)"
+    help = "Perform nightly housekeeping tasks [DEPRECATED]"
 
     def handle(self, *args, **options):
+        self.stdout.write(
+            "Running this command is no longer necessary: All housekeeping tasks\n"
+            "are addressed automatically via NetBox's built-in job scheduler. It\n"
+            "will be removed in a future release.",
+            self.style.WARNING
+        )
+
         config = Config()
 
         # Clear expired authentication sessions (essentially replicating the `clearsessions` command)
@@ -8,11 +8,15 @@ from django_pglocks import advisory_lock
 from rq.timeouts import JobTimeoutException
 
 from core.choices import JobStatusChoices
+from core.events import JOB_COMPLETED, JOB_FAILED
 from core.models import Job, ObjectType
+from extras.models import Notification
 from netbox.constants import ADVISORY_LOCK_KEYS
 from netbox.registry import registry
+from utilities.request import apply_request_processors
 
 __all__ = (
+    'AsyncViewJob',
     'JobRunner',
     'system_job',
 )
@@ -154,3 +158,35 @@ class JobRunner(ABC):
             job.delete()
 
         return cls.enqueue(instance=instance, schedule_at=schedule_at, interval=interval, *args, **kwargs)
+
+
+class AsyncViewJob(JobRunner):
+    """
+    Execute a view as a background job.
+    """
+    class Meta:
+        name = 'Async View'
+
+    def run(self, view_cls, request, **kwargs):
+        view = view_cls.as_view()
+
+        # Apply all registered request processors (e.g. event_tracking)
+        with apply_request_processors(request):
+            data = view(request)
+
+        self.job.data = {
+            'log': data.log,
+            'errors': data.errors,
+        }
+
+        # Notify the user
+        notification = Notification(
+            user=request.user,
+            object=self.job,
+            event_type=JOB_COMPLETED if not data.errors else JOB_FAILED,
+        )
+        notification.save()
+
+        # TODO: Waiting on fix for bug #19806
+        # if errors:
+        #     raise JobFailed()
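For orientation, the `JobRunner` API used above can be subclassed directly. A minimal sketch, assuming `Meta.name` is a display name (as with 'Async View') and that `enqueue()`'s `interval` is expressed in minutes (an inference from its signature, not confirmed by this diff):

```python
# Sketch: a custom background job built on the JobRunner API shown above.
# The class, its name, and the 24-hour interval are hypothetical.
from netbox.jobs import JobRunner


class NightlyCleanupJob(JobRunner):
    class Meta:
        name = 'Nightly Cleanup'  # display name, as with 'Async View'

    def run(self, *args, **kwargs):
        # Job logic goes here; self.job references the underlying Job
        # record, as seen in AsyncViewJob.run() above.
        ...


# Assumed: interval is in minutes, per JobRunner.enqueue()'s parameters.
NightlyCleanupJob.enqueue(interval=60 * 24)
```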
@@ -1,8 +1,5 @@
-from contextlib import ExitStack
-
 import logging
 import uuid
-import warnings
 
 from django.conf import settings
 from django.contrib import auth, messages
@@ -13,10 +10,10 @@ from django.db.utils import InternalError
 from django.http import Http404, HttpResponseRedirect
 
 from netbox.config import clear_config, get_config
-from netbox.registry import registry
 from netbox.views import handler_500
 from utilities.api import is_api_request
 from utilities.error_handlers import handle_rest_api_exception
+from utilities.request import apply_request_processors
 
 __all__ = (
     'CoreMiddleware',
@@ -36,12 +33,7 @@ class CoreMiddleware:
         request.id = uuid.uuid4()
 
         # Apply all registered request processors
-        with ExitStack() as stack:
-            for request_processor in registry['request_processors']:
-                try:
-                    stack.enter_context(request_processor(request))
-                except Exception as e:
-                    warnings.warn(f'Failed to initialize request processor {request_processor}: {e}')
+        with apply_request_processors(request):
             response = self.get_response(request)
 
         # Check if language cookie should be renewed
@@ -28,6 +28,7 @@ from utilities.export import TableExport
 from utilities.forms import BulkRenameForm, ConfirmationForm, restrict_form_fields
 from utilities.forms.bulk_import import BulkImportForm
 from utilities.htmx import htmx_partial
+from utilities.jobs import AsyncJobData, is_background_request, process_request_as_job
 from utilities.permissions import get_permission_for_model
 from utilities.query import reapply_model_ordering
 from utilities.request import safe_for_redirect
@@ -503,25 +504,32 @@ class BulkImportView(GetReturnURLMixin, BaseMultiObjectView):
 
         if form.is_valid():
             logger.debug("Import form validation was successful")
+            redirect_url = reverse(get_viewname(model, action='list'))
+            new_objects = []
+
+            # If indicated, defer this request to a background job & redirect the user
+            if form.cleaned_data['background_job']:
+                job_name = _('Bulk import {count} {object_type}').format(
+                    count=len(form.cleaned_data['data']),
+                    object_type=model._meta.verbose_name_plural,
+                )
+                if job := process_request_as_job(self.__class__, request, name=job_name):
+                    msg = _('Created background job {job.pk}: <a href="{url}">{job.name}</a>').format(
+                        url=job.get_absolute_url(),
+                        job=job
+                    )
+                    messages.info(request, mark_safe(msg))
+                    return redirect(redirect_url)
 
             try:
                 # Iterate through data and bind each record to a new model form instance.
                 with transaction.atomic(using=router.db_for_write(model)):
-                    new_objs = self.create_and_update_objects(form, request)
+                    new_objects = self.create_and_update_objects(form, request)
 
                 # Enforce object-level permissions
-                if self.queryset.filter(pk__in=[obj.pk for obj in new_objs]).count() != len(new_objs):
+                if self.queryset.filter(pk__in=[obj.pk for obj in new_objects]).count() != len(new_objects):
                     raise PermissionsViolation
 
-                if new_objs:
-                    msg = f"Imported {len(new_objs)} {model._meta.verbose_name_plural}"
-                    logger.info(msg)
-                    messages.success(request, msg)
-
-                    view_name = get_viewname(model, action='list')
-                    results_url = f"{reverse(view_name)}?modified_by_request={request.id}"
-                    return redirect(results_url)
-
             except (AbortTransaction, ValidationError):
                 clear_events.send(sender=self)
 
@@ -530,6 +538,25 @@ class BulkImportView(GetReturnURLMixin, BaseMultiObjectView):
                 form.add_error(None, e.message)
                 clear_events.send(sender=self)
 
+            # If this request was executed via a background job, return the raw data for logging
+            if is_background_request(request):
+                return AsyncJobData(
+                    log=[
+                        _('Created {object}').format(object=str(obj))
+                        for obj in new_objects
+                    ],
+                    errors=form.errors
+                )
+
+            if new_objects:
+                msg = _("Imported {count} {object_type}").format(
+                    count=len(new_objects),
+                    object_type=model._meta.verbose_name_plural
+                )
+                logger.info(msg)
+                messages.success(request, msg)
+                return redirect(f"{redirect_url}?modified_by_request={request.id}")
+
         else:
             logger.debug("Form validation failed")
 
@@ -50,6 +50,7 @@ Context:
 {% render_field form.data %}
 {% render_field form.format %}
 {% render_field form.csv_delimiter %}
+{% render_field form.background_job %}
 <div class="form-group">
     <div class="col col-md-12 text-end">
         {% if return_url %}
@@ -94,6 +95,7 @@ Context:
 {% render_field form.data_file %}
 {% render_field form.format %}
 {% render_field form.csv_delimiter %}
+{% render_field form.background_job %}
 <div class="form-group">
     <div class="col col-md-12 text-end">
         {% if return_url %}
@@ -37,6 +37,11 @@ class BulkImportForm(SyncedDataMixin, forms.Form):
         help_text=_("The character which delimits CSV fields. Applies only to CSV format."),
         required=False
     )
+    background_job = forms.BooleanField(
+        label=_('Background job'),
+        help_text=_("Enqueue a background job to complete the bulk import/update."),
+        required=False,
+    )
 
     data_field = 'data'
 
netbox/utilities/jobs.py (new file, 46 lines)
@@ -0,0 +1,46 @@
+from dataclasses import dataclass
+from typing import List
+
+from netbox.jobs import AsyncViewJob
+from utilities.request import copy_safe_request
+
+__all__ = (
+    'AsyncJobData',
+    'is_background_request',
+    'process_request_as_job',
+)
+
+
+@dataclass
+class AsyncJobData:
+    log: List[str]
+    errors: List[str]
+
+
+def is_background_request(request):
+    """
+    Return True if the request is being processed as a background job.
+    """
+    return getattr(request, '_background', False)
+
+
+def process_request_as_job(view, request, name=None):
+    """
+    Process a request using a view as a background job.
+    """
+
+    # Check that the request is not already being processed as a background job (would be a loop)
+    if is_background_request(request):
+        return
+
+    # Create a serializable copy of the original request
+    request_copy = copy_safe_request(request)
+    request_copy._background = True
+
+    # Enqueue a job to perform the work in the background
+    return AsyncViewJob.enqueue(
+        name=name,
+        user=request.user,
+        view_cls=view,
+        request=request_copy,
+    )
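The intended call pattern for these helpers, as exercised by `BulkImportView` earlier in this diff, is roughly the following sketch; the helper function and job name are placeholders:

```python
# Sketch: deferring an expensive view to a background job. The helper
# name and job name are placeholders, not part of the diff.
from utilities.jobs import is_background_request, process_request_as_job


def maybe_defer(view_cls, request):
    # Inside the background job itself, do the work inline instead of
    # re-enqueuing (process_request_as_job() also guards against this).
    if is_background_request(request):
        return None
    # Returns the enqueued job; the caller would then redirect the user.
    return process_request_as_job(view_cls, request, name='Deferred view')
```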
@@ -1,13 +1,17 @@
+import warnings
+from contextlib import ExitStack, contextmanager
 from urllib.parse import urlparse
 
 from django.utils.http import url_has_allowed_host_and_scheme
 from django.utils.translation import gettext_lazy as _
 from netaddr import AddrFormatError, IPAddress
 
+from netbox.registry import registry
 from .constants import HTTP_REQUEST_META_SAFE_COPY
 
 __all__ = (
     'NetBoxFakeRequest',
+    'apply_request_processors',
     'copy_safe_request',
     'get_client_ip',
     'safe_for_redirect',
@@ -48,6 +52,7 @@ def copy_safe_request(request):
         'GET': request.GET,
         'FILES': request.FILES,
         'user': request.user,
         'method': request.method,
         'path': request.path,
+        'id': getattr(request, 'id', None),  # UUID assigned by middleware
     })
@@ -87,3 +92,17 @@
     Returns True if the given URL is safe to use as an HTTP redirect; otherwise returns False.
     """
     return url_has_allowed_host_and_scheme(url, allowed_hosts=None)
+
+
+@contextmanager
+def apply_request_processors(request):
+    """
+    A context manager which applies all registered request processors (such as event_tracking).
+    """
+    with ExitStack() as stack:
+        for request_processor in registry['request_processors']:
+            try:
+                stack.enter_context(request_processor(request))
+            except Exception as e:
+                warnings.warn(f'Failed to initialize request processor {request_processor.__name__}: {e}')
+        yield
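For context, a request processor is any context manager taking the request, per the loop in `apply_request_processors()`. A hypothetical processor might look like the sketch below; the direct append to `registry['request_processors']` is an assumption, as this diff does not show the registration mechanism:

```python
# Sketch: a hypothetical request processor compatible with
# apply_request_processors(). The registry append is an assumption;
# NetBox may provide a dedicated registration helper instead.
import logging
from contextlib import contextmanager

from netbox.registry import registry

logger = logging.getLogger(__name__)


@contextmanager
def request_logger(request):
    # Wraps request handling, like the event_tracking processor noted above
    logger.debug("request %s started", getattr(request, 'id', None))
    try:
        yield
    finally:
        logger.debug("request %s finished", getattr(request, 'id', None))


registry['request_processors'].append(request_logger)  # assumed registration
```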