Merge in recent feature changes

This commit is contained in:
Daniel Sheppard 2024-02-12 09:44:30 -06:00
commit 4ae6683abc
189 changed files with 32068 additions and 5836 deletions

View File

@ -23,7 +23,7 @@ body:
attributes:
label: NetBox Version
description: What version of NetBox are you currently running?
placeholder: v3.6.9
placeholder: v3.7.2
validations:
required: true
- type: dropdown

View File

@ -7,6 +7,9 @@ contact_links:
- name: ❓ Discussion
url: https://github.com/netbox-community/netbox/discussions
about: "If you're just looking for help, try starting a discussion instead."
- name: 🌎 Correct a Translation
url: https://explore.transifex.com/netbox-community/netbox/
about: "Spot an incorrect translation? You can propose a fix on Transifex."
- name: 💡 Plugin Idea
url: https://plugin-ideas.netbox.dev
about: "Have an idea for a plugin? Head over to the ideas board!"

View File

@ -14,7 +14,7 @@ body:
attributes:
label: NetBox version
description: What version of NetBox are you currently running?
placeholder: v3.6.9
placeholder: v3.7.2
validations:
required: true
- type: dropdown

View File

@ -68,6 +68,9 @@ jobs:
- name: Collect static files
run: python netbox/manage.py collectstatic --no-input
- name: Check for missing migrations
run: python netbox/manage.py makemigrations --check
- name: Check PEP8 compliance
run: pycodestyle --ignore=W504,E501 --exclude=node_modules netbox/

View File

@ -9,13 +9,15 @@ on:
permissions:
issues: write
pull-requests: write
discussions: write
jobs:
lock:
runs-on: ubuntu-latest
steps:
- uses: dessant/lock-threads@v4
- uses: dessant/lock-threads@v5
with:
issue-inactive-days: 90
pr-inactive-days: 30
discussion-inactive-days: 180
issue-lock-reason: 'resolved'

View File

@ -86,12 +86,16 @@ intake policy](https://github.com/netbox-community/netbox/wiki/Issue-Intake-Poli
* In most cases, it is not necessary to add a changelog entry: A maintainer will take care of this when the PR is merged. (This helps avoid merge conflicts resulting from multiple PRs being submitted simultaneously.)
* All code submissions should meet the following criteria (CI will enforce these checks):
* All code submissions must meet the following criteria (CI will enforce these checks where feasible):
* Consist entirely of original work
* Python syntax is valid
* All tests pass when run with `./manage.py test`
* PEP 8 compliance is enforced, with the exception that lines may be
greater than 80 characters in length
> [!CAUTION]
> Any contributions which include AI-generated or reproduced content will be rejected.
* Some other tips to keep in mind:
* If you'd like to volunteer for someone else's issue, please post a comment on that issue letting us know. (This will allow the maintainers to assign it to you.)
* Check out our [developer docs](https://docs.netbox.dev/en/stable/development/getting-started/) for tips on setting up your development environment.
@ -117,8 +121,6 @@ We're always looking for motivated individuals to join the maintainers team and
We generally ask that maintainers dedicate around four hours of work to the project each week on average, which includes both hands-on development and project management tasks such as issue triage. Maintainers are also encouraged (but not required) to attend our bi-weekly Zoom call to catch up on recent items.
Many maintainers petition their employer to grant some of their paid time to work on NetBox. In doing so, your employer becomes eligible to be featured as a [NetBox sponsor](https://github.com/netbox-community/netbox/wiki/Sponsorship).
Interested? You can contact our lead maintainer, Jeremy Stretch, at jeremy@netbox.dev or on the [NetDev Community Slack](https://netdev.chat/). We'd love to have you on the team!
## :heart: Other Ways to Contribute

README.md
View File

@ -1,86 +1,129 @@
<div align="center">
<img src="https://raw.githubusercontent.com/netbox-community/netbox/develop/docs/netbox_logo.svg" width="400" alt="NetBox logo" />
<p>The premier source of truth powering network automation</p>
<img src="https://github.com/netbox-community/netbox/workflows/CI/badge.svg?branch=master" alt="CI status" />
<p><strong>The cornerstone of every automated network</strong></p>
<a href="https://github.com/netbox-community/netbox/releases"><img src="https://img.shields.io/github/v/release/netbox-community/netbox" alt="Latest release" /></a>
<a href="https://github.com/netbox-community/netbox/blob/master/LICENSE.txt"><img src="https://img.shields.io/badge/license-Apache_2.0-blue.svg" alt="License" /></a>
<a href="https://github.com/netbox-community/netbox/graphs/contributors"><img src="https://img.shields.io/github/contributors/netbox-community/netbox?color=blue" alt="Contributors" /></a>
<a href="https://github.com/netbox-community/netbox/stargazers"><img src="https://img.shields.io/github/stars/netbox-community/netbox?style=flat" alt="GitHub stars" /></a>
<a href="https://explore.transifex.com/netbox-community/netbox/"><img src="https://img.shields.io/badge/languages-6-blue" alt="Languages supported" /></a>
<a href="https://github.com/netbox-community/netbox/actions/workflows/ci.yml"><img src="https://github.com/netbox-community/netbox/workflows/CI/badge.svg?branch=master" alt="CI status" /></a>
<p></p>
</div>
NetBox is the leading solution for modeling and documenting modern networks. By
combining the traditional disciplines of IP address management (IPAM) and
datacenter infrastructure management (DCIM) with powerful APIs and extensions,
NetBox provides the ideal "source of truth" to power network automation.
Available as open source software under the Apache 2.0 license, NetBox serves
as the cornerstone for network automation in thousands of organizations.
NetBox exists to empower network engineers. Since its release in 2016, it has become the go-to solution for modeling and documenting network infrastructure for thousands of organizations worldwide. As a successor to legacy IPAM and DCIM applications, NetBox provides a cohesive, extensive, and accessible data model for all things networked. By providing a single robust user interface and programmable APIs for everything from cable maps to device configurations, NetBox serves as the central source of truth for the modern network.
* **Physical infrastructure:** Accurately model the physical world, from global regions down to individual racks of gear. Then connect everything - network, console, and power!
* **Modern IPAM:** All the standard IPAM functionality you expect, plus VRF import/export tracking, VLAN management, and overlay support.
* **Data circuits:** Confidently manage the delivery of critical circuits from various service providers, modeled seamlessly alongside your own infrastructure.
* **Power tracking:** Map the distribution of power from upstream sources to individual feeds and outlets.
* **Organization:** Manage tenant and contact assignments natively.
* **Powerful search:** Easily find anything you need using a single global search function.
* **Comprehensive logging:** Leverage both automatic change logging and user-submitted journal entries to track your network's growth over time.
* **Endless customization:** Custom fields, custom links, tags, export templates, custom validation, reports, scripts, and more!
* **Flexible permissions:** An advanced permissions system enables very flexible delegation of permissions.
* **Integrations:** Easily connect NetBox to your other tooling via its REST & GraphQL APIs.
* **Plugins:** Not finding what you need in the core application? Try one of many community plugins - or build your own!
<p align="center">
<a href="#netboxs-role">NetBox's Role</a> |
<a href="#why-netbox">Why NetBox?</a> |
<a href="#getting-started">Getting Started</a> |
<a href="#get-involved">Get Involved</a> |
<a href="#project-stats">Project Stats</a> |
<a href="#screenshots">Screenshots</a>
</p>
![Screenshot of NetBox UI](docs/media/screenshots/netbox-ui.png "NetBox UI")
<p align="center">
<img src="docs/media/screenshots/home-light.png" width="600" alt="NetBox user interface screenshot" />
</p>
## NetBox's Role
NetBox functions as the **source of truth** for your network infrastructure. Its job is to define and validate the _intended state_ of all network components and resources. NetBox does not interact with network nodes directly; rather, it makes this data available programmatically to purpose-built automation, monitoring, and assurance tools. This separation of duties enables the construction of a robust yet flexible automation system.
<p align="center">
<img src="docs/media/misc/reference_architecture.png" alt="Reference network automation architecture" />
</p>
The diagram above illustrates the recommended deployment architecture for an automated network, leveraging NetBox as the central authority for network state. This approach allows your team to swap out individual tools to meet changing needs while retaining a predictable, modular workflow.
## Why NetBox?
### Comprehensive Data Model
Racks, devices, cables, IP addresses, VLANs, circuits, power, VPNs, and lots more: NetBox is built for networks. Its comprehensive and thoroughly inter-linked data model provides for natural and highly structured modeling of myriad network primitives that just isn't possible using general-purpose tools. And there's no need to waste time contemplating how to build out a database: Everything is ready to go upon installation.
### Focused Development
NetBox strives to meet a singular goal: Provide the best available solution for making network infrastructure programmatically accessible. Unlike "all-in-one" tools which awkwardly bolt on half-baked features in an attempt to check every box, NetBox is committed to its core function. NetBox provides the best possible solution for modeling network infrastructure, and provides rich APIs for integrating with tools that excel in other areas of network automation.
### Extensible and Customizable
No two networks are exactly the same. Users are empowered to extend NetBox's native data model with custom fields and tags to best suit their unique needs. You can even write your own plugins to introduce entirely new objects and functionality!
### Flexible Permissions
NetBox includes a fully customizable permission system, which affords administrators incredible granularity when assigning roles to users and groups. Want to restrict certain users to working only with cabling and not be able to change IP addresses? Or maybe each team should have access only to a particular tenant? NetBox enables you to craft roles as you see fit.
### Custom Validation & Protection Rules
The data you put into NetBox is crucial to network operations. In addition to its robust native validation rules, NetBox provides mechanisms for administrators to define their own custom validation rules for objects. Custom validation can be used both to ensure new or modified objects adhere to a set of rules, and to prevent the deletion of objects which don't meet certain criteria. (For example, you might want to prevent the deletion of a device with an "active" status.)
### Device Configuration Rendering
NetBox can render user-created Jinja2 templates to generate device configurations from its own data. Configuration templates can be uploaded individually or pulled automatically from an external source, such as a git repository. Rendered configurations can be retrieved via the REST API for application directly to network devices via a provisioning tool such as Ansible or Salt.
### Custom Scripts
Complex workflows, such as provisioning a new branch office, can be tedious to carry out via the user interface. NetBox allows you to write and upload custom scripts that can be run directly from the UI. Scripts prompt users for input and then automate the necessary tasks to greatly simplify otherwise burdensome processes.
### Automated Events
Users can define event rules to automatically trigger a custom script or outbound webhook in response to a NetBox event. For example, you might want to automatically update a network monitoring service whenever a new device is added to NetBox, or update a DHCP server when an IP range is allocated.
### Comprehensive Change Logging
NetBox automatically logs the creation, modification, and deletion of all managed objects, providing a thorough change history. Changes can be attributed to the executing user, and related changes are grouped automatically by request ID.
> [!NOTE]
> A complete list of NetBox's myriad features can be found in [the introductory documentation](https://docs.netbox.dev/en/stable/introduction/).
## Getting Started
<div align="center">
[![NetBox logo](https://raw.githubusercontent.com/wiki/netbox-community/netbox/images/deploy/deploy1.png)](https://github.com/netbox-community/netbox)
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
[![Docker logo](https://raw.githubusercontent.com/wiki/netbox-community/netbox/images/deploy/deploy2.png)](https://github.com/netbox-community/netbox-docker)
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
[![NetBox Labs logo](https://raw.githubusercontent.com/wiki/netbox-community/netbox/images/deploy/deploy3.png)](https://netboxlabs.com/netbox-cloud/)
</div>
* Just want to explore? Check out [our public demo](https://demo.netbox.dev/) right now!
* The [official documentation](https://docs.netbox.dev) offers a comprehensive introduction.
* Check out [our wiki](https://github.com/netbox-community/netbox/wiki/Community-Contributions) for even more projects to get the most out of NetBox!
<p align="center">
<a href="https://netboxlabs.com/netbox-cloud/"><img src="docs/media/misc/netbox_cloud.png" alt="NetBox Cloud" /></a><br />
Looking for an enterprise solution? Check out <strong><a href="https://netboxlabs.com/netbox-cloud/">NetBox Cloud</a></strong>!
</p>
## Get Involved
* Follow [@NetBoxOfficial](https://twitter.com/NetBoxOfficial) on Twitter!
* Join the conversation on [the discussion forum](https://github.com/netbox-community/netbox/discussions) and [Slack](https://netdev.chat/)!
* Already a power user? You can [suggest a feature](https://github.com/netbox-community/netbox/issues/new?assignees=&labels=type%3A+feature&template=feature_request.yaml) or [report a bug](https://github.com/netbox-community/netbox/issues/new?assignees=&labels=type%3A+bug&template=bug_report.yaml) on GitHub.
* Contributions from the community are encouraged and appreciated! Check out our [contributing guide](CONTRIBUTING.md) to get started.
* [Share your idea](https://plugin-ideas.netbox.dev/) for a new plugin, or [learn how to build one](https://github.com/netbox-community/netbox-plugin-tutorial) yourself!
## Project Stats
<div align="center">
<p align="center">
<a href="https://github.com/netbox-community/netbox/commits"><img src="https://images.repography.com/29023055/netbox-community/netbox/recent-activity/whQtEr_TGD9PhW1BPlhlEQ5jnrgQ0KJpm-LlGtpoGO0/3Kx_iWUSBRJ5-AI4QwJEJWrUDEz3KrX2lvh8aYE0WXY_timeline.svg" alt="Timeline graph"></a>
<a href="https://github.com/netbox-community/netbox/issues"><img src="https://images.repography.com/29023055/netbox-community/netbox/recent-activity/whQtEr_TGD9PhW1BPlhlEQ5jnrgQ0KJpm-LlGtpoGO0/3Kx_iWUSBRJ5-AI4QwJEJWrUDEz3KrX2lvh8aYE0WXY_issues.svg" alt="Issues graph"></a>
<a href="https://github.com/netbox-community/netbox/pulls"><img src="https://images.repography.com/29023055/netbox-community/netbox/recent-activity/whQtEr_TGD9PhW1BPlhlEQ5jnrgQ0KJpm-LlGtpoGO0/3Kx_iWUSBRJ5-AI4QwJEJWrUDEz3KrX2lvh8aYE0WXY_prs.svg" alt="Pull requests graph"></a>
<a href="https://github.com/netbox-community/netbox/graphs/contributors"><img src="https://images.repography.com/29023055/netbox-community/netbox/recent-activity/whQtEr_TGD9PhW1BPlhlEQ5jnrgQ0KJpm-LlGtpoGO0/3Kx_iWUSBRJ5-AI4QwJEJWrUDEz3KrX2lvh8aYE0WXY_users.svg" alt="Top contributors"></a>
<br />Stats via <a href="https://repography.com">Repography</a>
</div>
## Sponsors
<div align="center">
[![NetBox Labs](https://raw.githubusercontent.com/wiki/netbox-community/netbox/images/sponsors/netbox_labs.png)](https://netboxlabs.com)
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
[![DigitalOcean](https://raw.githubusercontent.com/wiki/netbox-community/netbox/images/sponsors/digitalocean.png)](https://try.digitalocean.com/developer-cloud)
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
[![Sentry](https://raw.githubusercontent.com/wiki/netbox-community/netbox/images/sponsors/sentry.png)](https://sentry.io)
<br />
[![Equinix Metal](https://raw.githubusercontent.com/wiki/netbox-community/netbox/images/sponsors/equinix.png)](https://metal.equinix.com)
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
[![OneMind Services](https://raw.githubusercontent.com/wiki/netbox-community/netbox/images/sponsors/onemind_services.png)](https://onemindservices.com)
</div>
</p>
## Screenshots
![Screenshot of main page (dark mode)](docs/media/screenshots/home-dark.png "Main page (dark mode)")
![Screenshot of rack elevation](docs/media/screenshots/rack.png "Rack elevation")
![Screenshot of prefixes hierarchy](docs/media/screenshots/prefixes-list.png "Prefixes hierarchy")
![Screenshot of cable trace](docs/media/screenshots/cable-trace.png "Cable tracing")
<p align="center">
<strong>NetBox Dashboard (Light Mode)</strong><br />
<img src="docs/media/screenshots/home-light.png" width="600" alt="NetBox dashboard (light mode)" />
</p>
<p align="center">
<strong>NetBox Dashboard (Dark Mode)</strong><br />
<img src="docs/media/screenshots/home-dark.png" width="600" alt="NetBox dashboard (dark mode)" />
</p>
<p align="center">
<strong>Prefixes List</strong><br />
<img src="docs/media/screenshots/prefixes-list.png" width="600" alt="Prefixes list" />
</p>
<p align="center">
<strong>Rack View</strong><br />
<img src="docs/media/screenshots/rack.png" width="600" alt="Rack view" />
</p>
<p align="center">
<strong>Cable Trace</strong><br />
<img src="docs/media/screenshots/cable-trace.png" width="600" alt="Cable trace" />
</p>

View File

@ -73,7 +73,7 @@ You should be redirected to Microsoft's authentication portal. Enter the usernam
If successful, you will be redirected back to the NetBox UI, and will be logged in as the AD user. You can verify this by navigating to your profile (using the button at top right).
This user account has been replicated locally to NetBox, and can now be assigned groups and permissions within the NetBox admin UI.
This user account has been replicated locally to NetBox, and can now be assigned groups and permissions.
## Troubleshooting

View File

@ -67,4 +67,4 @@ You should be redirected to Okta's authentication portal. Enter the username/ema
If successful, you will be redirected back to the NetBox UI, and will be logged in as the Okta user. You can verify this by navigating to your profile (using the button at top right).
This user account has been replicated locally to NetBox, and can now be assigned groups and permissions within the NetBox admin UI.
This user account has been replicated locally to NetBox, and can now be assigned groups and permissions.

View File

@ -2,9 +2,9 @@
## Local Authentication
Local user accounts and groups can be created in NetBox under the "Authentication and Authorization" section of the administrative user interface. This interface is available only to users with the "staff" permission enabled.
Local user accounts and groups can be created in NetBox under the "Authentication" section in the "Admin" menu. This section is available only to users with the "staff" permission enabled.
At a minimum, each user account must have a username and password set. User accounts may also denote a first name, last name, and email address. [Permissions](../permissions.md) may also be assigned to users and/or groups within the admin UI.
At a minimum, each user account must have a username and password set. User accounts may also denote a first name, last name, and email address. [Permissions](../permissions.md) may also be assigned to individual users and/or groups as needed.
## Remote Authentication

View File

@ -10,6 +10,9 @@ The time zone NetBox will use when dealing with dates and times. It is recommend
You may define custom formatting for date and times. For detailed instructions on writing format strings, please see [the Django documentation](https://docs.djangoproject.com/en/stable/ref/templates/builtins/#date). Default formats are listed below.
!!! note
These system defaults will be overridden by a user's selected language/locale when [localization](./system.md#enable_localization) is enabled.
```python
DATE_FORMAT = 'N j, Y' # June 26, 2016
SHORT_DATE_FORMAT = 'Y-m-d' # 2016-06-26

View File

@ -46,4 +46,4 @@ The configuration file may be modified at any time. However, the WSGI service (e
$ sudo systemctl restart netbox
```
Configuration parameters which are set via the admin UI (those listed under "dynamic settings") take effect immediately.
Dynamic configuration parameters (those which can be modified via the UI) take effect immediately.

View File

@ -99,6 +99,14 @@ The maximum size (in bytes) of an incoming HTTP request (i.e. `GET` or `POST` da
---
## DJANGO_ADMIN_ENABLED
Default: False
Setting this to True installs the `django.contrib.admin` app and enables the [Django admin UI](https://docs.djangoproject.com/en/5.0/ref/contrib/admin/). This may be necessary to support older plugins which do not integrate with the native NetBox interface.
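If needed, the flag can be set in `configuration.py` like any other parameter; a minimal sketch (the file location assumes a standard installation):
```python
# configuration.py
# Re-enable the legacy Django admin UI (disabled by default)
DJANGO_ADMIN_ENABLED = True
```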
---
## ENFORCE_GLOBAL_UNIQUE
!!! tip "Dynamic Configuration Parameter"

View File

@ -69,15 +69,7 @@ Email is sent from NetBox only for critical events or if configured for [logging
Default: False
Determines if localization features are enabled or not. This should only be enabled for development or testing purposes as netbox is not yet fully localized. Turning this on will localize numeric and date formats (overriding what is set for DATE_FORMAT) based on the browser locale as well as translate certain strings from third party modules.
---
## GIT_PATH
Default: `git`
The system path to the `git` executable, used by the synchronization backend for remote git repositories.
Determines whether localization features are enabled. This should only be enabled for development or testing purposes, as NetBox is not yet fully localized. Turning this on will localize numeric and date formats (overriding any configured [system defaults](./date-time.md#date-and-time-formatting)) based on the browser locale, as well as translate certain strings from third-party modules.
---

View File

@ -5,8 +5,17 @@ Custom scripting was introduced to provide a way for users to execute custom log
* Automatically populate new devices and cables in preparation for a new site deployment
* Create a range of new reserved prefixes or IP addresses
* Fetch data from an external source and import it to NetBox
* Update objects with invalid or incomplete data
Custom scripts are Python code and exist outside of the official NetBox code base, so they can be updated and changed without interfering with the core NetBox installation. And because they're completely custom, there is no inherent limitation on what a script can accomplish.
They can also be used as a mechanism for validating the integrity of data within NetBox. Script authors can define tests to check objects against specific rules and conditions. For example, you can write a script to check that:
* All top-of-rack switches have a console connection
* Every router has a loopback interface with an IP address assigned
* Each interface description conforms to a standard format
* Every site has a minimum set of VLANs defined
* All IP addresses have a parent prefix
Custom scripts are Python code which exists outside the NetBox code base, so they can be updated and changed without interfering with the core NetBox installation. And because they're completely custom, there is no inherent limitation on what a script can accomplish.
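As a quick orientation before the details below, here is a minimal sketch of a custom script. The class, variable, and site names are hypothetical; the `Script` base class, `StringVar`, and `run()` signature follow the documented API.
```python
from dcim.models import Site
from extras.scripts import Script, StringVar


class CreateSiteScript(Script):

    class Meta:
        name = "Create Site"
        description = "Create a new site from a user-supplied name"

    site_name = StringVar(description="Name of the new site")

    def run(self, data, commit):
        # Form data submitted by the user arrives in the `data` dictionary
        site = Site(
            name=data["site_name"],
            slug=data["site_name"].lower().replace(" ", "-"),
        )
        site.save()
        self.log_success(f"Created new site: {site}")
        return f"Created {site}"
```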
## Writing Custom Scripts
@ -135,13 +144,73 @@ These two methods will load data in YAML or JSON format, respectively, from file
The Script object provides a set of convenient functions for recording messages at different severity levels:
* `log_debug`
* `log_success`
* `log_info`
* `log_warning`
* `log_failure`
* `log_debug(message, object=None)`
* `log_success(message, object=None)`
* `log_info(message, object=None)`
* `log_warning(message, object=None)`
* `log_failure(message, object=None)`
Log messages are returned to the user upon execution of the script. Markdown rendering is supported for log messages.
Log messages are returned to the user upon execution of the script. Markdown rendering is supported for log messages. A message may optionally be associated with a particular object by passing it as the second argument to the logging method.
## Test Methods
A script can define one or more test methods to report on certain conditions. All test methods must have a name beginning with `test_` and accept no arguments beyond `self`.
These methods are detected and run automatically when the script is executed, unless its `run()` method has been overridden. (When overriding `run()`, `run_tests()` can be called to run all test methods present in the script.)
!!! info
This functionality was ported from [legacy reports](./reports.md) in NetBox v4.0.
### Example
```
from dcim.choices import DeviceStatusChoices
from dcim.models import ConsolePort, Device, PowerPort
from extras.scripts import Script
class DeviceConnectionsReport(Script):
description = "Validate the minimum physical connections for each device"
def test_console_connection(self):
# Check that every console port for every active device has a connection defined.
active = DeviceStatusChoices.STATUS_ACTIVE
for console_port in ConsolePort.objects.prefetch_related('device').filter(device__status=active):
if not console_port.connected_endpoints:
self.log_failure(
f"No console connection defined for {console_port.name}",
console_port.device,
)
elif not console_port.connection_status:
self.log_warning(
f"Console connection for {console_port.name} marked as planned",
console_port.device,
)
else:
self.log_success("Passed", console_port.device)
def test_power_connections(self):
# Check that every active device has at least two connected power supplies.
for device in Device.objects.filter(status=DeviceStatusChoices.STATUS_ACTIVE):
connected_ports = 0
for power_port in PowerPort.objects.filter(device=device):
if power_port.connected_endpoints:
connected_ports += 1
if not power_port.path.is_active:
self.log_warning(
f"Power connection for {power_port.name} marked as planned",
device,
)
if connected_ports < 2:
self.log_failure(
f"{connected_ports} connected power supplies found (2 needed)",
device,
)
else:
self.log_success("Passed", device)
```
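As noted above, test methods are no longer invoked automatically once `run()` is overridden. A hedged sketch of how this might be wired up, calling `run_tests()` manually (the class and method names are hypothetical):
```python
from extras.scripts import Script


class TestRunnerScript(Script):

    class Meta:
        name = "Test Runner"
        description = "Illustrates calling run_tests() from an overridden run() method"

    def test_placeholder(self):
        # A trivial test method for illustration only
        self.log_success("Nothing to check")

    def run(self, data, commit):
        # Because run() is overridden, test_* methods are not executed
        # automatically; run_tests() runs every test method in the script.
        self.log_info("Running custom logic before the tests")
        self.run_tests()
        return "All tests executed"
```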
## Change Logging
@ -288,9 +357,9 @@ An IPv4 or IPv6 network with a mask. Returns a `netaddr.IPNetwork` object. Two a
## Running Custom Scripts
!!! note
To run a custom script, a user must be assigned via permissions for `Extras > Script`, `Extras > ScriptModule`, and `Core > ManagedFile` objects. They must also be assigned the `extras.run_script` permission. This is achieved by assigning the user (or group) a permission on the Script object and specifying the `run` action in the admin UI as shown below.
To run a custom script, a user must be assigned permissions for `Extras > Script`, `Extras > Script Module`, and `Core > Managed File` objects. They must also be assigned the `extras.run_script` permission. This is achieved by assigning the user (or group) a permission on the Script object and specifying the `run` action in "Permissions" as shown below.
![Adding the run action to a permission](../media/admin_ui_run_permission.png)
![Adding the run action to a permission](../media/run_permission.png)
### Via the Web UI

View File

@ -1,167 +1,63 @@
# NetBox Reports
A NetBox report is a mechanism for validating the integrity of data within NetBox. Running a report allows the user to verify that the objects defined within NetBox meet certain arbitrary conditions. For example, you can write reports to check that:
* All top-of-rack switches have a console connection
* Every router has a loopback interface with an IP address assigned
* Each interface description conforms to a standard format
* Every site has a minimum set of VLANs defined
* All IP addresses have a parent prefix
...and so on. Reports are completely customizable, so there's practically no limit to what you can test for.
## Writing Reports
Reports must be saved as files in the [`REPORTS_ROOT`](../configuration/system.md#reports_root) path (which defaults to `netbox/reports/`). Each file created within this path is considered a separate module. Each module holds one or more reports (Python classes), each of which performs a certain function. The logic of each report is broken into discrete test methods, each of which applies a small portion of the logic comprising the overall test.
!!! warning
The reports path includes a file named `__init__.py`, which registers the path as a Python module. Do not delete this file.
Reports are deprecated beginning with NetBox v4.0, and their functionality has been merged with [custom scripts](./custom-scripts.md). While backward compatibility has been maintained, users are advised to convert legacy reports into custom scripts soon, as support for legacy reports will be removed in a future release.
For example, we can create a module named `devices.py` to hold all of our reports which pertain to devices in NetBox. Within that module, we might define several reports. Each report is defined as a Python class inheriting from `extras.reports.Report`.
## Converting Reports to Scripts
```
### Step 1: Update Class Definition
Change the parent class from `Report` to `Script`:
```python title="Old code"
from extras.reports import Report
class DeviceConnectionsReport(Report):
description = "Validate the minimum physical connections for each device"
class DeviceIPsReport(Report):
description = "Check that every device has a primary IP address assigned"
class MyReport(Report):
```
Within each report class, we'll create a number of test methods to execute our report's logic. In DeviceConnectionsReport, for instance, we want to ensure that every live device has a console connection, an out-of-band management connection, and two power connections.
```python title="New code"
from extras.scripts import Script
class MyReport(Script):
```
from dcim.choices import DeviceStatusChoices
from dcim.models import ConsolePort, Device, PowerPort
from extras.reports import Report
### Step 2: Update Logging Calls
class DeviceConnectionsReport(Report):
description = "Validate the minimum physical connections for each device"
Reports and scripts both provide logging methods; however, their signatures differ. All script logging methods accept a message as the first parameter and an object as an optional second parameter.
def test_console_connection(self):
Additionally, the Report class' generic `log()` method is **not** available on Script. Users are advised to replace calls of this method with `log_info()`.
# Check that every console port for every active device has a connection defined.
active = DeviceStatusChoices.STATUS_ACTIVE
for console_port in ConsolePort.objects.prefetch_related('device').filter(device__status=active):
if not console_port.connected_endpoints:
Use the table below as a reference when updating these methods.
| Report (old) | Script (New) |
|-------------------------------|-----------------------------|
| `log(message)` | `log_info(message)` |
| `log_debug(obj, message)`[^1] | `log_debug(message, obj)` |
| `log_info(obj, message)` | `log_info(message, obj)` |
| `log_success(obj, message)` | `log_success(message, obj)` |
| `log_warning(obj, message)` | `log_warning(message, obj)` |
| `log_failure(obj, message)` | `log_failure(message, obj)` |
[^1]: `log_debug()` was added to the Report class in v4.0 to avoid confusion with the same method on Script
```python title="Old code"
self.log_failure(
console_port.device,
"No console connection defined for {}".format(console_port.name)
f"No console connection defined for {console_port.name}"
)
elif not console_port.connection_status:
self.log_warning(
console_port.device,
"Console connection for {} marked as planned".format(console_port.name)
)
else:
self.log_success(console_port.device)
```
def test_power_connections(self):
# Check that every active device has at least two connected power supplies.
for device in Device.objects.filter(status=DeviceStatusChoices.STATUS_ACTIVE):
connected_ports = 0
for power_port in PowerPort.objects.filter(device=device):
if power_port.connected_endpoints:
connected_ports += 1
if not power_port.path.is_active:
self.log_warning(
device,
"Power connection for {} marked as planned".format(power_port.name)
)
if connected_ports < 2:
```python title="New code"
self.log_failure(
device,
"{} connected power supplies found (2 needed)".format(connected_ports)
f"No console connection defined for {console_port.name}",
obj=console_port.device,
)
else:
self.log_success(device)
```
As you can see, reports are completely customizable. Validation logic can be as simple or as complex as needed. Also note that the `description` attribute supports Markdown syntax. It will be rendered on the report list page.
### Other Notes
!!! warning
Reports should never alter data: If you find yourself using the `create()`, `save()`, `update()`, or `delete()` methods on objects within reports, stop and re-evaluate what you're trying to accomplish. Note that there are no safeguards against the accidental alteration or destruction of data.
Existing reports will be converted to scripts automatically upon upgrading to NetBox v4.0, and previous job history will be retained. However, users are advised to convert legacy reports into custom scripts at the earliest opportunity, as support for legacy reports will be removed in a future release.
## Report Attributes
The `pre_run()` and `post_run()` Report methods have been carried over to Script. These are called automatically by Script's `run()` method. (Note that if you opt to override this method, you are responsible for calling `pre_run()` and `post_run()` where applicable.)
### `description`
A human-friendly description of what your report does.
### `scheduling_enabled`
By default, a report can be scheduled for execution at a later time. Setting `scheduling_enabled` to False disables this ability: Only immediate execution will be possible. (This also disables the ability to set a recurring execution interval.)
### `job_timeout`
Set the maximum allowed runtime for the report. If not set, `RQ_DEFAULT_TIMEOUT` will be used.
## Logging
The following methods are available to log results within a report:
* log(message)
* log_success(object, message=None)
* log_info(object, message)
* log_warning(object, message)
* log_failure(object, message)
The recording of one or more failure messages will automatically flag a report as failed. It is advised to log a success for each object that is evaluated so that the results will reflect how many objects are being reported on. (The inclusion of a log message is optional for successes.) Messages recorded with `log()` will appear in a report's results but are not associated with a particular object or status. Log messages also support using markdown syntax and will be rendered on the report result page.
To perform additional tasks, such as sending an email or calling a webhook, before or after a report is run, extend the `pre_run()` and/or `post_run()` methods, respectively.
By default, reports within a module are ordered alphabetically in the reports list page. To return reports in a specific order, you can define the `report_order` variable at the end of your module. The `report_order` variable is a tuple which contains each Report class in the desired order. Any reports that are omitted from this list will be listed last.
```
from extras.reports import Report
class DeviceConnectionsReport(Report)
pass
class DeviceIPsReport(Report)
pass
report_order = (DeviceIPsReport, DeviceConnectionsReport)
```
Once you have created a report, it will appear in the reports list. Initially, reports will have no results associated with them. To generate results, run the report.
## Running Reports
!!! note
To run a report, a user must be assigned via permissions for `Extras > Report`, `Extras > ReportModule`, and `Core > ManagedFile` objects. They must also be assigned the `extras.run_report` permission. This is achieved by assigning the user (or group) a permission on the Report object and specifying the `run` action in the admin UI as shown below.
![Adding the run action to a permission](../media/admin_ui_run_permission.png)
### Via the Web UI
Reports can be run via the web UI by navigating to the report and clicking the "run report" button at top right. Once a report has been run, its associated results will be included in the report view. It is possible to schedule a report to be executed at specified time in the future. A scheduled report can be canceled by deleting the associated job result object.
### Via the API
To run a report via the API, simply issue a POST request to its `run` endpoint. Reports are identified by their module and class name.
```
POST /api/extras/reports/<module>.<name>/run/
```
Our example report above would be called as:
```
POST /api/extras/reports/devices.DeviceConnectionsReport/run/
```
Optionally `schedule_at` can be passed in the form data with a datetime string to schedule a script at the specified date and time.
### Via the CLI
Reports can be run on the CLI by invoking the management command:
```
python3 manage.py runreport <module>
```
where ``<module>`` is the name of the python file in the ``reports`` directory without the ``.py`` extension. One or more report modules may be specified.
The `is_valid()` method on Report is no longer needed and has been removed.

View File

@ -80,6 +80,18 @@ Run the following command to update the device type definition validation schema
This will automatically update the schema file at `contrib/generated_schema.json`.
### Update & Compile Translations
Log into [Transifex](https://app.transifex.com/netbox-community/netbox/dashboard/) to download the updated string maps. Download the resource (portable object, or `.po`) file for each language and save them to `netbox/translations/$lang/LC_MESSAGES/django.po`, overwriting the current files. (Be sure to click the **Download for use** link.)
![Transifex download](../media/development/transifex_download.png)
Once the resource files for all languages have been updated, compile the machine object (`.mo`) files using the `compilemessages` management command:
```nohighlight
./manage.py compilemessages
```
### Update Version and Changelog
* Update the `VERSION` constant in `settings.py` to the new release version.
@ -90,7 +102,7 @@ Commit these changes to the `develop` branch and push upstream.
### Verify CI Build Status
Ensure that continuous integration testing on the `develop` branch is completing successfully. If it fails, take action to correct the failure before proceding with the release.
Ensure that continuous integration testing on the `develop` branch is completing successfully. If it fails, take action to correct the failure before proceeding with the release.
### Submit a Pull Request

View File

@ -0,0 +1,30 @@
# Translations
NetBox coordinates all translation work using the [Transifex](https://explore.transifex.com/netbox-community/netbox/) platform. Signing up for a Transifex account is free.
All language translations in NetBox are generated from the source file found at `netbox/translations/en/LC_MESSAGES/django.po`. This file contains the original English strings with empty mappings, and is generated as part of NetBox's release process. Transifex updates source strings from this file on a recurring basis, so new translation strings will appear in the platform automatically as the file is updated in the code base.
Reviewers log into Transifex and navigate to their designated language(s) to translate strings. The initial translation for most strings will be machine-generated via the AWS Translate service. Human reviewers are responsible for reviewing these translations and making corrections where necessary.
Immediately prior to each NetBox release, the translation maps for all completed languages will be downloaded from Transifex, compiled, and checked into the NetBox code base by a maintainer.
## Updating Translation Sources
To update the English `.po` file from which all translations are derived, use the `makemessages` management command:
```nohighlight
./manage.py makemessages -l en
```
Then, commit the change and push to the `develop` branch on GitHub. After some time, any new strings will appear for translation on Transifex automatically.
## Proposing New Languages
If you'd like to add support for a new language to NetBox, the first step is to [submit a GitHub issue](https://github.com/netbox-community/netbox/issues/new?assignees=&labels=type%3A+translation&projects=&template=translation.yaml) to capture the proposal. While we'd like to add as many languages as possible, we do need to limit the rate at which new languages are added. New languages will be selected according to community interest and the number of volunteers who sign up as translators.
Once a proposed language has been approved, a NetBox maintainer will:
* Add it to the Transifex platform
* Designate one or more reviewers
* Create the initial machine-generated translations for review
* Add it to the list of supported languages

View File

@ -39,7 +39,7 @@ When rendered for a specific NetBox device, the template's `device` variable wil
### Context Data
The objet for which the configuration is being rendered is made available as template context as `device` or `virtualmachine` for devices and virtual machines, respectively. Additionally, NetBox model classes can be accessed by the app or plugin in which they reside. For example:
The object for which the configuration is being rendered is made available as template context as `device` or `virtualmachine` for devices and virtual machines, respectively. Additionally, NetBox model classes can be accessed by the app or plugin in which they reside. For example:
```
There are {{ dcim.Site.objects.count() }} sites.
@ -70,6 +70,11 @@ This request will trigger resolution of the device's preferred config template i
If no config template has been assigned to any of these three objects, the request will fail.
The configuration can be rendered as JSON or as plaintext by setting the `Accept:` HTTP header. For example:
* `Accept: application/json`
* `Accept: text/plain`
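As an illustration, a hedged sketch using Python's `requests` library. The URL, token, and device ID are placeholders, and the endpoint path is assumed to be the device `render-config` REST API endpoint this page describes.
```python
import requests

NETBOX_URL = "https://netbox.example.com"   # placeholder
TOKEN = "0123456789abcdef"                  # placeholder API token
DEVICE_ID = 123                             # placeholder device ID

# Request the rendered configuration as plain text rather than JSON
response = requests.post(
    f"{NETBOX_URL}/api/dcim/devices/{DEVICE_ID}/render-config/",
    headers={
        "Authorization": f"Token {TOKEN}",
        "Accept": "text/plain",
    },
)
print(response.text)
```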
### General Purpose Use
NetBox config templates can also be rendered without being tied to any specific device, using a separate general purpose REST API endpoint. Any data included with a POST request to this endpoint will be passed as context data for the template.
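A hedged sketch of such a request, again using `requests`. The endpoint path and IDs are assumptions based on NetBox's config template API; any JSON body supplied is passed through as template context.
```python
import requests

NETBOX_URL = "https://netbox.example.com"   # placeholder
TOKEN = "0123456789abcdef"                  # placeholder API token
TEMPLATE_ID = 7                             # placeholder config template ID

# POST arbitrary context data; it is made available to the template when rendering
response = requests.post(
    f"{NETBOX_URL}/api/extras/config-templates/{TEMPLATE_ID}/render/",
    headers={"Authorization": f"Token {TOKEN}", "Accept": "text/plain"},
    json={"hostname": "router1", "ntp_servers": ["10.0.0.1", "10.0.0.2"]},
)
print(response.text)
```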

View File

@ -28,4 +28,4 @@ For more detail, see the reference documentation for NetBox's [conditional logic
## Event Rule Processing
When a change is detected, any resulting events are placed into a Redis queue for processing. This allows the user's request to complete without needing to wait for the outgoing event(s) to be processed. The events are then extracted from the queue by the `rqworker` process. The current event queue and any failed events can be inspected in the admin UI under System > Background Tasks.
When a change is detected, any resulting events are placed into a Redis queue for processing. This allows the user's request to complete without needing to wait for the outgoing event(s) to be processed. The events are then extracted from the queue by the `rqworker` process. The current event queue and any failed events can be inspected under System > Background Tasks.

View File

@ -1,6 +1,6 @@
# Synchronized Data
Several models in NetBox support the automatic synchronization of local data from a designated remote source. For example, [configuration templates](./configuration-rendering.md) defined in NetBox can source their content from text files stored in a remote git repository. This accomplished using the core [data source](../models/core/datasource.md) and [data file](../models/core/datafile.md) models.
Several models in NetBox support the automatic synchronization of local data from a designated remote source. For example, [configuration templates](./configuration-rendering.md) defined in NetBox can source their content from text files stored in a remote git repository. This is accomplished using the core [data source](../models/core/datasource.md) and [data file](../models/core/datafile.md) models.
To enable remote data synchronization, the NetBox administrator first designates one or more remote data sources. NetBox currently supports the following source types:

View File

@ -4,7 +4,7 @@
NetBox is the leading solution for modeling and documenting modern networks. By combining the traditional disciplines of IP address management (IPAM) and datacenter infrastructure management (DCIM) with powerful APIs and extensions, NetBox provides the ideal "source of truth" to power network automation. Read on to discover why thousands of organizations worldwide put NetBox at the heart of their infrastructure.
[![NetBox UI](./media/screenshots/netbox-ui.png)](./media/screenshots/netbox-ui.png)
[![NetBox UI](./media/screenshots/home-light.png)](./media/screenshots/home-light.png)
## :material-server-network: Built for Networks

View File

@ -58,3 +58,6 @@ You should see output similar to the following:
If the NetBox service fails to start, issue the command `journalctl -eu netbox` to check for log messages that may indicate the problem.
Once you've verified that the WSGI workers are up and running, move on to HTTP server setup.
!!! note
There is a bug in the current stable release of gunicorn (v21.2.0) where automatic restarts of the worker processes can result in 502 errors under heavy load. (See [gunicorn bug #3038](https://github.com/benoitc/gunicorn/issues/3038) for more detail.) Users who encounter this issue may opt to downgrade to an earlier, unaffected release of gunicorn (`pip install gunicorn==20.1.0`). Note, however, that this earlier release does not officially support Python 3.11.

View File

@ -73,9 +73,9 @@ If no body template is specified, the request body will be populated with a JSON
## Webhook Processing
Using [Event Rules](../features/event-rules.md), when a change is detected, any resulting webhooks are placed into a Redis queue for processing. This allows the user's request to complete without needing to wait for the outgoing webhook(s) to be processed. The webhooks are then extracted from the queue by the `rqworker` process and HTTP requests are sent to their respective destinations. The current webhook queue and any failed webhooks can be inspected in the admin UI under System > Background Tasks.
Using [Event Rules](../features/event-rules.md), when a change is detected, any resulting webhooks are placed into a Redis queue for processing. This allows the user's request to complete without needing to wait for the outgoing webhook(s) to be processed. The webhooks are then extracted from the queue by the `rqworker` process and HTTP requests are sent to their respective destinations. The current webhook queue and any failed webhooks can be inspected under System > Background Tasks.
A request is considered successful if the response has a 2XX status code; otherwise, the request is marked as having failed. Failed requests may be retried manually via the admin UI.
A request is considered successful if the response has a 2XX status code; otherwise, the request is marked as having failed. Failed requests may be requeued manually under System > Background Tasks.
## Troubleshooting
@ -106,6 +106,6 @@ Content-Type: application/x-www-form-urlencoded
------------
```
Note that `webhook_receiver` does not actually _do_ anything with the information received: It merely prints the request headers and body for inspection.
Note that `webhook_receiver` does not actually _do_ anything with the information received: It merely prints the request headers and body for inspection. If you don't see any output, check that the `rqworker` process is running and that webhook events are being placed into the queue.
Now, when the NetBox webhook is triggered and processed, you should see its headers and content appear in the terminal where the webhook receiver is listening. If you don't, check that the `rqworker` process is running and that webhook events are being placed into the queue (visible under the NetBox admin UI).
Webhook results can be found in the NetBox admin UI under the Background Tasks section. You can see any finished or failed runs, as well as the error log for failed webhooks.
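To start the receiver locally for testing, a minimal invocation sketch (assuming the `webhook_receiver` management command referenced above and a standard installation layout):
```nohighlight
python netbox/manage.py webhook_receiver
```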

Binary file not shown.

After

Width:  |  Height:  |  Size: 54 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 6.8 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 46 KiB

View File

Before

Width:  |  Height:  |  Size: 22 KiB

After

Width:  |  Height:  |  Size: 22 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 100 KiB

After

Width:  |  Height:  |  Size: 207 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 173 KiB

After

Width:  |  Height:  |  Size: 316 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 309 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 171 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 116 KiB

After

Width:  |  Height:  |  Size: 356 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 81 KiB

After

Width:  |  Height:  |  Size: 235 KiB

View File

@ -14,7 +14,7 @@ The IKE version employed (v1 or v2).
### Mode
The IKE mode employed (main or aggressive).
The mode employed (main or aggressive) when IKEv1 is in use. This setting is not supported for IKEv2.
### Proposals

View File

@ -47,3 +47,14 @@ class ReminderWidget(DashboardWidget):
def render(self, request):
return self.config.get('content')
```
## Initialization
To register the widget, the widget module must be imported. The recommended approach is to do this within the `ready()` method of your `PluginConfig`:
```python
class FooBarConfig(PluginConfig):
def ready(self):
super().ready()
from . import widgets # point this to the above widget module you created
```

View File

@ -20,4 +20,4 @@ backends = [MyDataBackend]
!!! tip
The path to the list of search indexes can be modified by setting `data_backends` in the PluginConfig instance.
::: core.data_backends.DataBackend
::: netbox.data_backends.DataBackend

View File

@ -1,5 +1,60 @@
# NetBox v3.7
## v3.7.3 (FUTURE)
---
## v3.7.2 (2024-02-05)
### Enhancements
* [#13729](https://github.com/netbox-community/netbox/issues/13729) - Omit sensitive data source parameters from change log data
* [#14645](https://github.com/netbox-community/netbox/issues/14645) - Limit the number of assigned IP addresses displayed under interfaces list
### Bug Fixes
* [#14500](https://github.com/netbox-community/netbox/issues/14500) - Optimize calculation of available child prefixes & ranges when viewing a prefix
* [#14511](https://github.com/netbox-community/netbox/issues/14511) - Fix GraphQL support for interfaces connected to provider networks
* [#14572](https://github.com/netbox-community/netbox/issues/14572) - Correct the number of jobs listed for individual report & script modules
* [#14703](https://github.com/netbox-community/netbox/issues/14703) - Revert to the default layout when encountering a misconfigured dashboard
* [#14755](https://github.com/netbox-community/netbox/issues/14755) - Fix validation of choice values & labels when creating a custom field choice set via the REST API
* [#14838](https://github.com/netbox-community/netbox/issues/14838) - Avoid corrupting JSON data when changing the action type while editing an event rule
* [#14839](https://github.com/netbox-community/netbox/issues/14839) - Fix form validation error when attempting to terminate a tunnel to a virtual machine interface
* [#14840](https://github.com/netbox-community/netbox/issues/14840) - Fix `NoReverseMatch` exception when rendering a custom field which references a user
* [#14847](https://github.com/netbox-community/netbox/issues/14847) - IKE policy mode may be set only when IKEv1 is selected
* [#14851](https://github.com/netbox-community/netbox/issues/14851) - Automatically remove any associated bookmarks when deleting a user
* [#14879](https://github.com/netbox-community/netbox/issues/14879) - Include custom fields in REST API representation of data sources
* [#14885](https://github.com/netbox-community/netbox/issues/14885) - Add missing "group" field to VPN tunnel creation form
* [#14892](https://github.com/netbox-community/netbox/issues/14892) - Fix exception when running report/script via command line due to missing username
* [#14920](https://github.com/netbox-community/netbox/issues/14920) - Include button to display available status choices when bulk importing virtual device contexts
* [#14945](https://github.com/netbox-community/netbox/issues/14945) - Fix "select all" button for device type components
* [#14947](https://github.com/netbox-community/netbox/issues/14947) - Ensure that application & removal of tags is always recorded in an object's change log
* [#14962](https://github.com/netbox-community/netbox/issues/14962) - Fix config context rendering for VMs assigned directly to a site (rather than via a cluster)
* [#14999](https://github.com/netbox-community/netbox/issues/14999) - Fix "create & add another" link for interface FHRP group assignment
* [#15015](https://github.com/netbox-community/netbox/issues/15015) - Pre-populate assigned tenant when allocating next available IP address under prefix view
* [#15020](https://github.com/netbox-community/netbox/issues/15020) - Automatically update all VMs when changing a cluster's assigned site
* [#15025](https://github.com/netbox-community/netbox/issues/15025) - The `can_add()` template filter should accept a model (not an instance)
---
## v3.7.1 (2024-01-17)
### Bug Fixes
* [#13844](https://github.com/netbox-community/netbox/issues/13844) - Use `available_at_site` filter when filtering VLANs under prefix form
* [#14663](https://github.com/netbox-community/netbox/issues/14663) - Fix tunnel creation when setting initial termination to a VM interface
* [#14706](https://github.com/netbox-community/netbox/issues/14706) - Relax one-to-one mapping of tunnel termination to IP address
* [#14709](https://github.com/netbox-community/netbox/issues/14709) - Fix typo in tunnel termination type choice name
* [#14749](https://github.com/netbox-community/netbox/issues/14749) - Remove errant translation wrapper from `installed_device` on DeviceBay
* [#14778](https://github.com/netbox-community/netbox/issues/14778) - Custom field API serializer should accept null values for all optional fields
* [#14791](https://github.com/netbox-community/netbox/issues/14791) - Hide available prefixes when searching within a parent prefix
* [#14793](https://github.com/netbox-community/netbox/issues/14793) - Add missing Diffie-Hellman group 15
* [#14816](https://github.com/netbox-community/netbox/issues/14816) - Ensure default contact assignment ordering is consistent
* [#14817](https://github.com/netbox-community/netbox/issues/14817) - Relax required fields for IKE & IPSec models on bulk import
* [#14827](https://github.com/netbox-community/netbox/issues/14827) - Ensure all matching event rules are processed in response to an event
---
## v3.7.0 (2023-12-29)
### Breaking Changes

View File

@ -2,14 +2,33 @@
## v4.0.0 (FUTURE)
### Breaking Changes
* The deprecated `device_role` & `device_role_id` filters for devices have been removed. (Use `role` and `role_id` instead.)
### New Features
#### Complete UI Refresh ([#12128](https://github.com/netbox-community/netbox/issues/12128))
The NetBox user interface has been completely refreshed and updated.
### Enhancements
* [#12851](https://github.com/netbox-community/netbox/issues/12851) - Replace bleach HTML sanitization library with nh3
* [#14637](https://github.com/netbox-community/netbox/issues/14637) - Upgrade to Django 5.0
* [#14672](https://github.com/netbox-community/netbox/issues/14672) - Add support for Python 3.12
* [#14728](https://github.com/netbox-community/netbox/issues/14728) - The plugins list view has been moved from the legacy admin UI to the main NetBox UI
* [#14729](https://github.com/netbox-community/netbox/issues/14729) - All background task views have been moved from the legacy admin UI to the main NetBox UI
### Other Changes
* [#12325](https://github.com/netbox-community/netbox/issues/12325) - The Django admin UI is now disabled by default (set `DJANGO_ADMIN_ENABLED` to True to enable it)
* [#12795](https://github.com/netbox-community/netbox/issues/12795) - NetBox now uses a custom User model rather than the stock model provided by Django
* [#13647](https://github.com/netbox-community/netbox/issues/13647) - Squash all database migrations prior to v3.7
* [#14092](https://github.com/netbox-community/netbox/issues/14092) - Remove backward compatibility for importing plugin resources from `extras.plugins` (now `netbox.plugins`)
* [#14638](https://github.com/netbox-community/netbox/issues/14638) - Drop support for Python 3.8 and 3.9
* [#14657](https://github.com/netbox-community/netbox/issues/14657) - Remove backward compatibility for old permissions mapping under `ActionsMixin`
* [#14658](https://github.com/netbox-community/netbox/issues/14658) - Remove backward compatibility for importing `process_webhook()` (now `extras.webhooks.send_webhook()`)
* [#14740](https://github.com/netbox-community/netbox/issues/14740) - Remove the obsolete `BootstrapMixin` form mixin class
* [#15099](https://github.com/netbox-community/netbox/issues/15099) - Remove obsolete `device_role` and `device_role_id` filters for devices
* [#15100](https://github.com/netbox-community/netbox/issues/15100) - Remove obsolete `NullableCharField` class
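For installations that still depend on the legacy admin interface, re-enabling it is a single configuration flag. A minimal sketch, assuming the standard `configuration.py` used by a NetBox install (only the setting name comes from the changelog entry above):

```python
# configuration.py (sketch)
# The Django admin UI is disabled by default as of v4.0; opt back in with:
DJANGO_ADMIN_ENABLED = True
```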

View File

@ -52,6 +52,7 @@ extra_css:
markdown_extensions:
- admonition
- attr_list
- footnotes
- pymdownx.emoji:
emoji_index: !!python/name:material.extensions.emoji.twemoji
emoji_generator: !!python/name:material.extensions.emoji.to_svg
@ -286,6 +287,7 @@ nav:
- User Preferences: 'development/user-preferences.md'
- Web UI: 'development/web-ui.md'
- Internationalization: 'development/internationalization.md'
- Translations: 'development/translations.md'
- Release Checklist: 'development/release-checklist.md'
- git Cheat Sheet: 'development/git-cheat-sheet.md'
- Release Notes:

View File

@ -36,7 +36,7 @@ class DataSourceSerializer(NetBoxModelSerializer):
model = DataSource
fields = [
'id', 'url', 'display', 'name', 'type', 'source_url', 'enabled', 'status', 'description', 'comments',
'parameters', 'ignore_rules', 'created', 'last_updated', 'file_count',
'parameters', 'ignore_rules', 'custom_fields', 'created', 'last_updated', 'file_count',
]

View File

@ -21,7 +21,7 @@ class DataSourceBulkEditForm(NetBoxModelBulkEditForm):
enabled = forms.NullBooleanField(
required=False,
widget=BulkEditNullBooleanSelect(),
label=_('Enforce unique space')
label=_('Enabled')
)
description = forms.CharField(
label=_('Description'),

View File

@ -119,10 +119,7 @@ class JobFilterForm(SavedFiltersMixin, FilterForm):
user = DynamicModelMultipleChoiceField(
queryset=get_user_model().objects.all(),
required=False,
label=_('User'),
widget=APISelectMultiple(
api_url='/api/users/users/',
)
label=_('User')
)

View File

@ -9,9 +9,9 @@ class Command(_Command):
"""
This built-in management command enables the creation of new database schema migration files, which should
never be required by an ordinary user. We prevent this command from executing unless the configuration
indicates that the user is a developer (i.e. configuration.DEVELOPER == True).
indicates that the user is a developer (i.e. configuration.DEVELOPER == True), or it was run with --check.
"""
if not settings.DEVELOPER:
if not kwargs['check_changes'] and not settings.DEVELOPER:
raise CommandError(
"This command is available for development purposes only. It will\n"
"NOT resolve any issues with missing or unapplied migrations. For assistance,\n"

View File

@ -14,6 +14,7 @@ from django.utils import timezone
from django.utils.module_loading import import_string
from django.utils.translation import gettext as _
from netbox.constants import CENSOR_TOKEN, CENSOR_TOKEN_CHANGED
from netbox.models import PrimaryModel
from netbox.models.features import JobsMixin
from netbox.registry import registry
@ -130,6 +131,28 @@ class DataSource(JobsMixin, PrimaryModel):
'source_url': f"URLs for local sources must start with file:// (or specify no scheme)"
})
def to_objectchange(self, action):
objectchange = super().to_objectchange(action)
# Censor any backend parameters marked as sensitive in the serialized data
pre_change_params = {}
post_change_params = {}
if objectchange.prechange_data:
pre_change_params = objectchange.prechange_data.get('parameters') or {} # parameters may be None
if objectchange.postchange_data:
post_change_params = objectchange.postchange_data.get('parameters') or {}
for param in self.backend_class.sensitive_parameters:
if post_change_params.get(param):
if post_change_params[param] != pre_change_params.get(param):
# Set the "changed" token if the parameter's value has been modified
post_change_params[param] = CENSOR_TOKEN_CHANGED
else:
post_change_params[param] = CENSOR_TOKEN
if pre_change_params.get(param):
pre_change_params[param] = CENSOR_TOKEN
return objectchange
def enqueue_sync_job(self, request):
"""
Enqueue a background job to synchronize the DataSource by calling sync().

View File

@ -0,0 +1,122 @@
from django.test import TestCase
from core.models import DataSource
from extras.choices import ObjectChangeActionChoices
from netbox.constants import CENSOR_TOKEN, CENSOR_TOKEN_CHANGED
class DataSourceChangeLoggingTestCase(TestCase):
def test_password_added_on_create(self):
datasource = DataSource.objects.create(
name='Data Source 1',
type='git',
source_url='http://localhost/',
parameters={
'username': 'jeff',
'password': 'foobar123',
}
)
objectchange = datasource.to_objectchange(ObjectChangeActionChoices.ACTION_CREATE)
self.assertIsNone(objectchange.prechange_data)
self.assertEqual(objectchange.postchange_data['parameters']['username'], 'jeff')
self.assertEqual(objectchange.postchange_data['parameters']['password'], CENSOR_TOKEN_CHANGED)
def test_password_added_on_update(self):
datasource = DataSource.objects.create(
name='Data Source 1',
type='git',
source_url='http://localhost/'
)
datasource.snapshot()
# Add a blank password
datasource.parameters = {
'username': 'jeff',
'password': '',
}
objectchange = datasource.to_objectchange(ObjectChangeActionChoices.ACTION_UPDATE)
self.assertIsNone(objectchange.prechange_data['parameters'])
self.assertEqual(objectchange.postchange_data['parameters']['username'], 'jeff')
self.assertEqual(objectchange.postchange_data['parameters']['password'], '')
# Add a password
datasource.parameters = {
'username': 'jeff',
'password': 'foobar123',
}
objectchange = datasource.to_objectchange(ObjectChangeActionChoices.ACTION_UPDATE)
self.assertEqual(objectchange.postchange_data['parameters']['username'], 'jeff')
self.assertEqual(objectchange.postchange_data['parameters']['password'], CENSOR_TOKEN_CHANGED)
def test_password_changed(self):
datasource = DataSource.objects.create(
name='Data Source 1',
type='git',
source_url='http://localhost/',
parameters={
'username': 'jeff',
'password': 'password1',
}
)
datasource.snapshot()
# Change the password
datasource.parameters['password'] = 'password2'
objectchange = datasource.to_objectchange(ObjectChangeActionChoices.ACTION_UPDATE)
self.assertEqual(objectchange.prechange_data['parameters']['username'], 'jeff')
self.assertEqual(objectchange.prechange_data['parameters']['password'], CENSOR_TOKEN)
self.assertEqual(objectchange.postchange_data['parameters']['username'], 'jeff')
self.assertEqual(objectchange.postchange_data['parameters']['password'], CENSOR_TOKEN_CHANGED)
def test_password_removed_on_update(self):
datasource = DataSource.objects.create(
name='Data Source 1',
type='git',
source_url='http://localhost/',
parameters={
'username': 'jeff',
'password': 'foobar123',
}
)
datasource.snapshot()
objectchange = datasource.to_objectchange(ObjectChangeActionChoices.ACTION_UPDATE)
self.assertEqual(objectchange.prechange_data['parameters']['username'], 'jeff')
self.assertEqual(objectchange.prechange_data['parameters']['password'], CENSOR_TOKEN)
self.assertEqual(objectchange.postchange_data['parameters']['username'], 'jeff')
self.assertEqual(objectchange.postchange_data['parameters']['password'], CENSOR_TOKEN)
# Remove the password
datasource.parameters['password'] = ''
objectchange = datasource.to_objectchange(ObjectChangeActionChoices.ACTION_UPDATE)
self.assertEqual(objectchange.prechange_data['parameters']['username'], 'jeff')
self.assertEqual(objectchange.prechange_data['parameters']['password'], CENSOR_TOKEN)
self.assertEqual(objectchange.postchange_data['parameters']['username'], 'jeff')
self.assertEqual(objectchange.postchange_data['parameters']['password'], '')
def test_password_not_modified(self):
datasource = DataSource.objects.create(
name='Data Source 1',
type='git',
source_url='http://localhost/',
parameters={
'username': 'username1',
'password': 'foobar123',
}
)
datasource.snapshot()
# Change the username
datasource.parameters['username'] = 'username2'
objectchange = datasource.to_objectchange(ObjectChangeActionChoices.ACTION_UPDATE)
self.assertEqual(objectchange.prechange_data['parameters']['username'], 'username1')
self.assertEqual(objectchange.prechange_data['parameters']['password'], CENSOR_TOKEN)
self.assertEqual(objectchange.postchange_data['parameters']['username'], 'username2')
self.assertEqual(objectchange.postchange_data['parameters']['password'], CENSOR_TOKEN)
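Read together, these tests pin down the censoring rules applied by `DataSource.to_objectchange()`: an unchanged sensitive parameter is recorded as `CENSOR_TOKEN` on both sides of the change, a newly written value appears as `CENSOR_TOKEN_CHANGED`, and an explicitly blanked value is left as an empty string. A compact sketch of the record produced by a password rotation (token values come from `netbox.constants`):

```python
# Change-log payloads for a password rotation (sketch):
prechange_data = {'parameters': {'username': 'jeff', 'password': CENSOR_TOKEN}}
postchange_data = {'parameters': {'username': 'jeff', 'password': CENSOR_TOKEN_CHANGED}}
```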

View File

@ -1288,18 +1288,6 @@ class DeviceComponentFilterSet(django_filters.FilterSet):
to_field_name='name',
label=_('Virtual Chassis'),
)
# TODO: Remove in v4.0
device_role_id = django_filters.ModelMultipleChoiceFilter(
field_name='device__role',
queryset=DeviceRole.objects.all(),
label=_('Device role (ID)'),
)
device_role = django_filters.ModelMultipleChoiceFilter(
field_name='device__role__slug',
queryset=DeviceRole.objects.all(),
to_field_name='slug',
label=_('Device role (slug)'),
)
def search(self, queryset, name, value):
if not value.strip():

View File

@ -727,7 +727,7 @@ class PowerOutletImportForm(NetBoxModelImportForm):
help_text=_('Local power port which feeds this outlet')
)
feed_leg = CSVChoiceField(
label=_('Feed lag'),
label=_('Feed leg'),
choices=PowerOutletFeedLegChoices,
required=False,
help_text=_('Electrical phase (for three-phase circuits)')
@ -1359,6 +1359,10 @@ class VirtualDeviceContextImportForm(NetBoxModelImportForm):
to_field_name='name',
help_text='Assigned tenant'
)
status = CSVChoiceField(
label=_('Status'),
choices=VirtualDeviceContextStatusChoices,
)
class Meta:
fields = [

View File

@ -393,10 +393,7 @@ class RackReservationFilterForm(TenancyFilterForm, NetBoxModelFilterSetForm):
user_id = DynamicModelMultipleChoiceField(
queryset=get_user_model().objects.all(),
required=False,
label=_('User'),
widget=APISelectMultiple(
api_url='/api/users/users/',
)
label=_('User')
)
tag = TagFilterField(model)
@ -551,8 +548,7 @@ class ModuleTypeFilterForm(NetBoxModelFilterSetForm):
manufacturer_id = DynamicModelMultipleChoiceField(
queryset=Manufacturer.objects.all(),
required=False,
label=_('Manufacturer'),
fetch_trigger='open'
label=_('Manufacturer')
)
part_number = forms.CharField(
label=_('Part number'),
@ -828,8 +824,7 @@ class VirtualDeviceContextFilterForm(
device = DynamicModelMultipleChoiceField(
queryset=Device.objects.all(),
required=False,
label=_('Device'),
fetch_trigger='open'
label=_('Device')
)
status = forms.MultipleChoiceField(
label=_('Status'),
@ -855,8 +850,7 @@ class ModuleFilterForm(LocalConfigContextFilterForm, TenancyFilterForm, NetBoxMo
manufacturer_id = DynamicModelMultipleChoiceField(
queryset=Manufacturer.objects.all(),
required=False,
label=_('Manufacturer'),
fetch_trigger='open'
label=_('Manufacturer')
)
module_type_id = DynamicModelMultipleChoiceField(
queryset=ModuleType.objects.all(),
@ -864,8 +858,7 @@ class ModuleFilterForm(LocalConfigContextFilterForm, TenancyFilterForm, NetBoxMo
query_params={
'manufacturer_id': '$manufacturer_id'
},
label=_('Type'),
fetch_trigger='open'
label=_('Type')
)
status = forms.MultipleChoiceField(
label=_('Status'),
@ -1414,8 +1407,7 @@ class InventoryItemFilterForm(DeviceComponentFilterForm):
role_id = DynamicModelMultipleChoiceField(
queryset=InventoryItemRole.objects.all(),
required=False,
label=_('Role'),
fetch_trigger='open'
label=_('Role')
)
manufacturer_id = DynamicModelMultipleChoiceField(
queryset=Manufacturer.objects.all(),

View File

@ -1,6 +1,6 @@
import graphene
from circuits.graphql.types import CircuitTerminationType
from circuits.models import CircuitTermination
from circuits.graphql.types import CircuitTerminationType, ProviderNetworkType
from circuits.models import CircuitTermination, ProviderNetwork
from dcim.graphql.types import (
ConsolePortTemplateType,
ConsolePortType,
@ -167,3 +167,42 @@ class InventoryItemComponentType(graphene.Union):
return PowerPortType
if type(instance) is RearPort:
return RearPortType
class ConnectedEndpointType(graphene.Union):
class Meta:
types = (
CircuitTerminationType,
ConsolePortType,
ConsoleServerPortType,
FrontPortType,
InterfaceType,
PowerFeedType,
PowerOutletType,
PowerPortType,
ProviderNetworkType,
RearPortType,
)
@classmethod
def resolve_type(cls, instance, info):
if type(instance) is CircuitTermination:
return CircuitTerminationType
if type(instance) is ConsolePort:
return ConsolePortType
if type(instance) is ConsoleServerPort:
return ConsoleServerPortType
if type(instance) is FrontPort:
return FrontPortType
if type(instance) is Interface:
return InterfaceType
if type(instance) is PowerFeed:
return PowerFeedType
if type(instance) is PowerOutlet:
return PowerOutletType
if type(instance) is PowerPort:
return PowerPortType
if type(instance) is ProviderNetwork:
return ProviderNetworkType
if type(instance) is RearPort:
return RearPortType

View File

@ -13,7 +13,7 @@ class CabledObjectMixin:
class PathEndpointMixin:
connected_endpoints = graphene.List('dcim.graphql.gfk_mixins.LinkPeerType')
connected_endpoints = graphene.List('dcim.graphql.gfk_mixins.ConnectedEndpointType')
def resolve_connected_endpoints(self, info):
# Handle empty values

View File

@ -1115,7 +1115,7 @@ class DeviceBay(ComponentModel, TrackingModelMixin):
installed_device = models.OneToOneField(
to='dcim.Device',
on_delete=models.SET_NULL,
related_name=_('parent_bay'),
related_name='parent_bay',
blank=True,
null=True
)

View File

@ -35,6 +35,9 @@ DEVICEBAY_STATUS = """
"""
INTERFACE_IPADDRESSES = """
{% if value.count > 3 %}
<a href="{% url 'ipam:ipaddress_list' %}?interface_id={{ record.pk }}">{{ value.count }}</a>
{% else %}
{% for ip in value.all %}
{% if ip.status != 'active' %}
<a href="{{ ip.get_absolute_url }}" class="badge text-bg-{{ ip.get_status_color }}" data-bs-toggle="tooltip" data-bs-placement="left" title="{{ ip.get_status_display }}">{{ ip }}</a>
@ -42,6 +45,7 @@ INTERFACE_IPADDRESSES = """
<a href="{{ ip.get_absolute_url }}">{{ ip }}</a>
{% endif %}
{% endfor %}
{% endif %}
"""
INTERFACE_FHRPGROUPS = """

View File

@ -58,7 +58,11 @@ class DeviceComponentsView(generic.ObjectChildrenView):
return self.child_model.objects.restrict(request.user, 'view').filter(device=parent)
class DeviceTypeComponentsView(DeviceComponentsView):
class DeviceTypeComponentsView(generic.ObjectChildrenView):
actions = {
**DEFAULT_ACTION_PERMISSIONS,
'bulk_rename': {'change'},
}
queryset = DeviceType.objects.all()
template_name = 'dcim/devicetype/component_templates.html'
viewname = None # Used for return_url resolution

View File

@ -3,6 +3,7 @@ from django.core.exceptions import ObjectDoesNotExist
from drf_spectacular.types import OpenApiTypes
from drf_spectacular.utils import extend_schema_field
from rest_framework import serializers
from rest_framework.fields import ListField
from core.api.nested_serializers import NestedDataSourceSerializer, NestedDataFileSerializer, NestedJobSerializer
from core.api.serializers import JobSerializer
@ -49,8 +50,6 @@ __all__ = (
'SavedFilterSerializer',
'ScriptDetailSerializer',
'ScriptInputSerializer',
'ScriptLogMessageSerializer',
'ScriptOutputSerializer',
'ScriptSerializer',
'TagSerializer',
'WebhookSerializer',
@ -126,11 +125,15 @@ class CustomFieldSerializer(ValidatedModelSerializer):
type = ChoiceField(choices=CustomFieldTypeChoices)
object_type = ContentTypeField(
queryset=ContentType.objects.all(),
required=False
required=False,
allow_null=True
)
filter_logic = ChoiceField(choices=CustomFieldFilterLogicChoices, required=False)
data_type = serializers.SerializerMethodField()
choice_set = NestedCustomFieldChoiceSetSerializer(required=False)
choice_set = NestedCustomFieldChoiceSetSerializer(
required=False,
allow_null=True
)
ui_visible = ChoiceField(choices=CustomFieldUIVisibleChoices, required=False)
ui_editable = ChoiceField(choices=CustomFieldUIEditableChoices, required=False)
@ -171,6 +174,12 @@ class CustomFieldChoiceSetSerializer(ValidatedModelSerializer):
choices=CustomFieldChoiceSetBaseChoices,
required=False
)
extra_choices = serializers.ListField(
child=serializers.ListField(
min_length=2,
max_length=2
)
)
class Meta:
model = CustomFieldChoiceSet
@ -593,22 +602,6 @@ class ScriptInputSerializer(serializers.Serializer):
return value
class ScriptLogMessageSerializer(serializers.Serializer):
status = serializers.SerializerMethodField(read_only=True)
message = serializers.SerializerMethodField(read_only=True)
def get_status(self, instance):
return instance[0]
def get_message(self, instance):
return instance[1]
class ScriptOutputSerializer(serializers.Serializer):
log = ScriptLogMessageSerializer(many=True, read_only=True)
output = serializers.CharField(read_only=True)
#
# Change logging
#
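The `extra_choices` field declared above requires each choice to be a two-item `[value, label]` pair; the API test added later in this changeset confirms that single-item lists are rejected with HTTP 400. A minimal sketch of a request body that satisfies the validator:

```python
# Sketch: a well-formed payload for creating a custom field choice set via the REST API.
payload = {
    "name": "Example Choices",
    "extra_choices": [
        ["choice1", "Choice 1"],  # [value, label]
        ["choice2", "Choice 2"],
    ],
}
```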

View File

@ -20,7 +20,6 @@ router.register('image-attachments', views.ImageAttachmentViewSet)
router.register('journal-entries', views.JournalEntryViewSet)
router.register('config-contexts', views.ConfigContextViewSet)
router.register('config-templates', views.ConfigTemplateViewSet)
router.register('reports', views.ReportViewSet, basename='report')
router.register('scripts', views.ScriptViewSet, basename='script')
router.register('object-changes', views.ObjectChangeViewSet)
router.register('content-types', views.ContentTypeViewSet)

View File

@ -16,7 +16,6 @@ from core.choices import JobStatusChoices
from core.models import Job
from extras import filtersets
from extras.models import *
from extras.reports import get_module_and_report, run_report
from extras.scripts import get_module_and_script, run_script
from netbox.api.authentication import IsAuthenticatedOrLoginNotRequired
from netbox.api.features import SyncedDataMixin
@ -211,111 +210,6 @@ class ConfigTemplateViewSet(SyncedDataMixin, ConfigTemplateRenderMixin, NetBoxMo
return self.render_configtemplate(request, configtemplate, context)
#
# Reports
#
class ReportViewSet(ViewSet):
permission_classes = [IsAuthenticatedOrLoginNotRequired]
_ignore_model_permissions = True
schema = None
lookup_value_regex = '[^/]+' # Allow dots
def _get_report(self, pk):
try:
module_name, report_name = pk.split('.', maxsplit=1)
except ValueError:
raise Http404
module, report = get_module_and_report(module_name, report_name)
if report is None:
raise Http404
return module, report
def list(self, request):
"""
Compile all reports and their related results (if any). Result data is deferred in the list view.
"""
results = {
job.name: job
for job in Job.objects.filter(
object_type=ContentType.objects.get(app_label='extras', model='reportmodule'),
status__in=JobStatusChoices.TERMINAL_STATE_CHOICES
).order_by('name', '-created').distinct('name').defer('data')
}
report_list = []
for report_module in ReportModule.objects.restrict(request.user):
report_list.extend([report() for report in report_module.reports.values()])
# Attach Job objects to each report (if any)
for report in report_list:
report.result = results.get(report.name, None)
serializer = serializers.ReportSerializer(report_list, many=True, context={
'request': request,
})
return Response({'count': len(report_list), 'results': serializer.data})
def retrieve(self, request, pk):
"""
Retrieve a single Report identified as "<module>.<report>".
"""
module, report = self._get_report(pk)
# Retrieve the Report and Job, if any.
object_type = ContentType.objects.get(app_label='extras', model='reportmodule')
report.result = Job.objects.filter(
object_type=object_type,
name=report.name,
status__in=JobStatusChoices.TERMINAL_STATE_CHOICES
).first()
serializer = serializers.ReportDetailSerializer(report, context={
'request': request
})
return Response(serializer.data)
@action(detail=True, methods=['post'])
def run(self, request, pk):
"""
Run a Report identified as "<module>.<report>" and return the pending Job as the result
"""
# Check that the user has permission to run reports.
if not request.user.has_perm('extras.run_report'):
raise PermissionDenied("This user does not have permission to run reports.")
# Check that at least one RQ worker is running
if not Worker.count(get_connection('default')):
raise RQWorkerNotRunningException()
# Retrieve and run the Report. This will create a new Job.
module, report_cls = self._get_report(pk)
report = report_cls
input_serializer = serializers.ReportInputSerializer(
data=request.data,
context={'report': report}
)
if input_serializer.is_valid():
report.result = Job.enqueue(
run_report,
instance=module,
name=report.class_name,
user=request.user,
job_timeout=report.job_timeout,
schedule_at=input_serializer.validated_data.get('schedule_at'),
interval=input_serializer.validated_data.get('interval')
)
serializer = serializers.ReportDetailSerializer(report, context={'request': request})
return Response(serializer.data)
return Response(input_serializer.errors, status=status.HTTP_400_BAD_REQUEST)
#
# Scripts
#

View File

@ -1,3 +1,5 @@
import logging
from django.utils.translation import gettext_lazy as _
from utilities.choices import ButtonColorChoices, ChoiceSet
@ -164,6 +166,7 @@ class JournalEntryKindChoices(ChoiceSet):
class LogLevelChoices(ChoiceSet):
LOG_DEBUG = 'debug'
LOG_DEFAULT = 'default'
LOG_SUCCESS = 'success'
LOG_INFO = 'info'
@ -171,6 +174,7 @@ class LogLevelChoices(ChoiceSet):
LOG_FAILURE = 'failure'
CHOICES = (
(LOG_DEBUG, _('Debug'), 'teal'),
(LOG_DEFAULT, _('Default'), 'gray'),
(LOG_SUCCESS, _('Success'), 'green'),
(LOG_INFO, _('Info'), 'cyan'),
@ -178,6 +182,15 @@ class LogLevelChoices(ChoiceSet):
(LOG_FAILURE, _('Failure'), 'red'),
)
SYSTEM_LEVELS = {
LOG_DEBUG: logging.DEBUG,
LOG_DEFAULT: logging.INFO,
LOG_SUCCESS: logging.INFO,
LOG_INFO: logging.INFO,
LOG_WARNING: logging.WARNING,
LOG_FAILURE: logging.ERROR,
}
class DurationChoices(ChoiceSet):

View File

@ -53,13 +53,13 @@ def get_dashboard(user):
return dashboard
def get_default_dashboard():
def get_default_dashboard(config=None):
from extras.models import Dashboard
dashboard = Dashboard()
default_config = settings.DEFAULT_DASHBOARD or DEFAULT_DASHBOARD
config = config or settings.DEFAULT_DASHBOARD or DEFAULT_DASHBOARD
for widget in default_config:
for widget in config:
id = str(uuid.uuid4())
dashboard.layout.append({
'id': id,

View File

@ -71,17 +71,17 @@ def enqueue_object(queue, instance, user, request_id, action):
})
def process_event_rules(event_rules, model_name, event, data, username, snapshots=None, request_id=None):
try:
def process_event_rules(event_rules, model_name, event, data, username=None, snapshots=None, request_id=None):
if username:
user = get_user_model().objects.get(username=username)
except ObjectDoesNotExist:
else:
user = None
for event_rule in event_rules:
# Evaluate event rule conditions (if any)
if not event_rule.eval_conditions(data):
return
continue
# Webhooks
if event_rule.action_type == EventRuleActionChoices.WEBHOOK:

View File

@ -381,8 +381,7 @@ class ConfigContextFilterForm(SavedFiltersMixin, FilterForm):
cluster_type_id = DynamicModelMultipleChoiceField(
queryset=ClusterType.objects.all(),
required=False,
label=_('Cluster types'),
fetch_trigger='open'
label=_('Cluster types')
)
cluster_group_id = DynamicModelMultipleChoiceField(
queryset=ClusterGroup.objects.all(),
@ -462,10 +461,7 @@ class JournalEntryFilterForm(NetBoxModelFilterSetForm):
created_by_id = DynamicModelMultipleChoiceField(
queryset=get_user_model().objects.all(),
required=False,
label=_('User'),
widget=APISelectMultiple(
api_url='/api/users/users/',
)
label=_('User')
)
assigned_object_type_id = DynamicModelMultipleChoiceField(
queryset=ContentType.objects.all(),
@ -508,10 +504,7 @@ class ObjectChangeFilterForm(SavedFiltersMixin, FilterForm):
user_id = DynamicModelMultipleChoiceField(
queryset=get_user_model().objects.all(),
required=False,
label=_('User'),
widget=APISelectMultiple(
api_url='/api/users/users/',
)
label=_('User')
)
changed_object_type_id = DynamicModelMultipleChoiceField(
queryset=ContentType.objects.all(),

View File

@ -142,10 +142,12 @@ class CustomLinkForm(forms.ModelForm):
}
help_texts = {
'link_text': _(
"Jinja2 template code for the link text. Reference the object as <code>{{ object }}</code>. Links "
"Jinja2 template code for the link text. Reference the object as {example}. Links "
"which render as empty text will not be displayed."
),
'link_url': _("Jinja2 template code for the link URL. Reference the object as <code>{{ object }}</code>."),
).format(example="<code>{{ object }}</code>"),
'link_url': _(
"Jinja2 template code for the link URL. Reference the object as {example}."
).format(example="<code>{{ object }}</code>"),
}

View File

@ -1,65 +0,0 @@
import time
from django.core.management.base import BaseCommand
from django.utils import timezone
from core.choices import JobStatusChoices
from core.models import Job
from extras.models import ReportModule
from extras.reports import run_report
class Command(BaseCommand):
help = "Run a report to validate data in NetBox"
def add_arguments(self, parser):
parser.add_argument('reports', nargs='+', help="Report(s) to run")
def handle(self, *args, **options):
for module in ReportModule.objects.all():
for report in module.reports.values():
if module.name in options['reports'] or report.full_name in options['reports']:
# Run the report and create a new Job
self.stdout.write(
"[{:%H:%M:%S}] Running {}...".format(timezone.now(), report.full_name)
)
job = Job.enqueue(
run_report,
instance=module,
name=report.class_name,
job_timeout=report.job_timeout
)
# Wait on the job to finish
while job.status not in JobStatusChoices.TERMINAL_STATE_CHOICES:
time.sleep(1)
job = Job.objects.get(pk=job.pk)
# Report on success/failure
if job.status == JobStatusChoices.STATUS_FAILED:
status = self.style.ERROR('FAILED')
elif job == JobStatusChoices.STATUS_ERRORED:
status = self.style.ERROR('ERRORED')
else:
status = self.style.SUCCESS('SUCCESS')
for test_name, attrs in job.data.items():
self.stdout.write(
"\t{}: {} success, {} info, {} warning, {} failure".format(
test_name, attrs['success'], attrs['info'], attrs['warning'], attrs['failure']
)
)
self.stdout.write(
"[{:%H:%M:%S}] {}: {}".format(timezone.now(), report.full_name, status)
)
self.stdout.write(
"[{:%H:%M:%S}] {}: Duration {}".format(timezone.now(), report.full_name, job.duration)
)
# Wrap things up
self.stdout.write(
"[{:%H:%M:%S}] Finished".format(timezone.now())
)

View File

@ -10,7 +10,6 @@ from django.db import transaction
from core.choices import JobStatusChoices
from core.models import Job
from extras.api.serializers import ScriptOutputSerializer
from extras.context_managers import event_tracking
from extras.scripts import get_module_and_script
from extras.signals import clear_events
@ -34,6 +33,7 @@ class Command(BaseCommand):
parser.add_argument('script', help="Script to run")
def handle(self, *args, **options):
def _run_script():
"""
Core script execution task. We capture this within a subfunction to allow for conditionally wrapping it with
@ -48,7 +48,7 @@ class Command(BaseCommand):
except AbortTransaction:
script.log_info("Database changes have been reverted automatically.")
clear_events.send(request)
job.data = ScriptOutputSerializer(script).data
job.data = script.get_job_data()
job.terminate()
except Exception as e:
stacktrace = traceback.format_exc()
@ -58,9 +58,17 @@ class Command(BaseCommand):
script.log_info("Database changes have been reverted due to error.")
logger.error(f"Exception raised during script execution: {e}")
clear_events.send(request)
job.data = ScriptOutputSerializer(script).data
job.data = script.get_job_data()
job.terminate(status=JobStatusChoices.STATUS_ERRORED, error=repr(e))
# Print any test method results
for test_name, attrs in job.data['tests'].items():
self.stdout.write(
"\t{}: {} success, {} info, {} warning, {} failure".format(
test_name, attrs['success'], attrs['info'], attrs['warning'], attrs['failure']
)
)
logger.info(f"Script completed in {job.duration}")
User = get_user_model()
@ -69,6 +77,7 @@ class Command(BaseCommand):
script = options['script']
loglevel = options['loglevel']
commit = options['commit']
try:
data = json.loads(options['data'])
except TypeError:

View File

@ -0,0 +1,21 @@
# Generated by Django 4.2.9 on 2024-01-19 19:46
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
('extras', '0105_customfield_min_max_values'),
]
operations = [
migrations.AlterField(
model_name='bookmark',
name='user',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL),
),
]

View File

@ -0,0 +1,31 @@
from django.db import migrations
def convert_reportmodule_jobs(apps, schema_editor):
ContentType = apps.get_model('contenttypes', 'ContentType')
Job = apps.get_model('core', 'Job')
# Convert all ReportModule jobs to ScriptModule jobs
if reportmodule_ct := ContentType.objects.filter(app_label='extras', model='reportmodule').first():
scriptmodule_ct = ContentType.objects.get(app_label='extras', model='scriptmodule')
Job.objects.filter(object_type_id=reportmodule_ct.id).update(object_type_id=scriptmodule_ct.id)
class Migration(migrations.Migration):
dependencies = [
('extras', '0106_bookmark_user_cascade_deletion'),
]
operations = [
migrations.RunPython(
code=convert_reportmodule_jobs,
reverse_code=migrations.RunPython.noop
),
migrations.DeleteModel(
name='Report',
),
migrations.DeleteModel(
name='ReportModule',
),
]

View File

@ -3,7 +3,6 @@ from .configs import *
from .customfields import *
from .dashboard import *
from .models import *
from .reports import *
from .scripts import *
from .search import *
from .staging import *

View File

@ -8,6 +8,16 @@ __all__ = (
class PythonModuleMixin:
def get_jobs(self, name):
"""
Returns a list of Jobs associated with this specific script or report module
:param name: The class name of the script or report
:return: List of Jobs associated with this module
"""
return self.jobs.filter(
name=name
)
@property
def path(self):
return os.path.splitext(self.file_path)[0]

View File

@ -771,7 +771,7 @@ class Bookmark(models.Model):
)
user = models.ForeignKey(
to=settings.AUTH_USER_MODEL,
on_delete=models.PROTECT
on_delete=models.CASCADE
)
objects = RestrictedQuerySet.as_manager()

View File

@ -1,80 +0,0 @@
import inspect
import logging
from functools import cached_property
from django.db import models
from django.urls import reverse
from django.utils.translation import gettext_lazy as _
from core.choices import ManagedFileRootPathChoices
from core.models import ManagedFile
from extras.utils import is_report
from netbox.models.features import JobsMixin, EventRulesMixin
from utilities.querysets import RestrictedQuerySet
from .mixins import PythonModuleMixin
logger = logging.getLogger('netbox.reports')
__all__ = (
'Report',
'ReportModule',
)
class Report(EventRulesMixin, models.Model):
"""
Dummy model used to generate permissions for reports. Does not exist in the database.
"""
class Meta:
managed = False
class ReportModuleManager(models.Manager.from_queryset(RestrictedQuerySet)):
def get_queryset(self):
return super().get_queryset().filter(file_root=ManagedFileRootPathChoices.REPORTS)
class ReportModule(PythonModuleMixin, JobsMixin, ManagedFile):
"""
Proxy model for report module files.
"""
objects = ReportModuleManager()
class Meta:
proxy = True
verbose_name = _('report module')
verbose_name_plural = _('report modules')
def get_absolute_url(self):
return reverse('extras:report_list')
def __str__(self):
return self.python_name
@cached_property
def reports(self):
def _get_name(cls):
# For child objects in submodules use the full import path w/o the root module as the name
return cls.full_name.split(".", maxsplit=1)[1]
try:
module = self.get_module()
except (ImportError, SyntaxError) as e:
logger.error(f"Unable to load report module {self.name}, exception: {e}")
return {}
reports = {}
ordered = getattr(module, 'report_order', [])
for cls in ordered:
reports[_get_name(cls)] = cls
for name, cls in inspect.getmembers(module, is_report):
if cls not in ordered:
reports[_get_name(cls)] = cls
return reports
def save(self, *args, **kwargs):
self.file_root = ManagedFileRootPathChoices.REPORTS
return super().save(*args, **kwargs)

View File

@ -3,6 +3,7 @@ import logging
from functools import cached_property
from django.db import models
from django.db.models import Q
from django.urls import reverse
from django.utils.translation import gettext_lazy as _
@ -32,7 +33,8 @@ class Script(EventRulesMixin, models.Model):
class ScriptModuleManager(models.Manager.from_queryset(RestrictedQuerySet)):
def get_queryset(self):
return super().get_queryset().filter(file_root=ManagedFileRootPathChoices.SCRIPTS)
return super().get_queryset().filter(
Q(file_root=ManagedFileRootPathChoices.SCRIPTS) | Q(file_root=ManagedFileRootPathChoices.REPORTS))
class ScriptModule(PythonModuleMixin, JobsMixin, ManagedFile):

View File

@ -120,34 +120,29 @@ class ConfigContextModelQuerySet(RestrictedQuerySet):
if self.model._meta.model_name == 'device':
base_query.add((Q(locations=OuterRef('location')) | Q(locations=None)), Q.AND)
base_query.add((Q(device_types=OuterRef('device_type')) | Q(device_types=None)), Q.AND)
base_query.add((Q(roles=OuterRef('role')) | Q(roles=None)), Q.AND)
base_query.add((Q(sites=OuterRef('site')) | Q(sites=None)), Q.AND)
region_field = 'site__region'
sitegroup_field = 'site__group'
elif self.model._meta.model_name == 'virtualmachine':
base_query.add((Q(roles=OuterRef('role')) | Q(roles=None)), Q.AND)
base_query.add((Q(sites=OuterRef('cluster__site')) | Q(sites=None)), Q.AND)
base_query.add(Q(device_types=None), Q.AND)
region_field = 'cluster__site__region'
sitegroup_field = 'cluster__site__group'
base_query.add((Q(roles=OuterRef('role')) | Q(roles=None)), Q.AND)
base_query.add((Q(sites=OuterRef('site')) | Q(sites=None)), Q.AND)
base_query.add(
(Q(
regions__tree_id=OuterRef(f'{region_field}__tree_id'),
regions__level__lte=OuterRef(f'{region_field}__level'),
regions__lft__lte=OuterRef(f'{region_field}__lft'),
regions__rght__gte=OuterRef(f'{region_field}__rght'),
regions__tree_id=OuterRef('site__region__tree_id'),
regions__level__lte=OuterRef('site__region__level'),
regions__lft__lte=OuterRef('site__region__lft'),
regions__rght__gte=OuterRef('site__region__rght'),
) | Q(regions=None)),
Q.AND
)
base_query.add(
(Q(
site_groups__tree_id=OuterRef(f'{sitegroup_field}__tree_id'),
site_groups__level__lte=OuterRef(f'{sitegroup_field}__level'),
site_groups__lft__lte=OuterRef(f'{sitegroup_field}__lft'),
site_groups__rght__gte=OuterRef(f'{sitegroup_field}__rght'),
site_groups__tree_id=OuterRef('site__group__tree_id'),
site_groups__level__lte=OuterRef('site__group__level'),
site_groups__lft__lte=OuterRef('site__group__lft'),
site_groups__rght__gte=OuterRef('site__group__rght'),
) | Q(site_groups=None)),
Q.AND
)

View File

@ -1,248 +1,33 @@
import inspect
import logging
import traceback
from datetime import timedelta
from django.utils import timezone
from django.utils.functional import classproperty
from django_rq import job
from core.choices import JobStatusChoices
from core.models import Job
from .choices import LogLevelChoices
from .models import ReportModule
from .scripts import BaseScript
__all__ = (
'Report',
'get_module_and_report',
'run_report',
)
logger = logging.getLogger(__name__)
def get_module_and_report(module_name, report_name):
module = ReportModule.objects.get(file_path=f'{module_name}.py')
report = module.reports.get(report_name)()
return module, report
@job('default')
def run_report(job, *args, **kwargs):
"""
Helper function to call the run method on a report. This is needed to get around the inability to pickle an instance
method for queueing into the background processor.
"""
job.start()
module = ReportModule.objects.get(pk=job.object_id)
report = module.reports.get(job.name)()
try:
report.run(job)
except Exception as e:
job.terminate(status=JobStatusChoices.STATUS_ERRORED, error=repr(e))
logging.error(f"Error during execution of report {job.name}")
finally:
# Schedule the next job if an interval has been set
if job.interval:
new_scheduled_time = job.scheduled + timedelta(minutes=job.interval)
Job.enqueue(
run_report,
instance=job.object,
name=job.name,
user=job.user,
job_timeout=report.job_timeout,
schedule_at=new_scheduled_time,
interval=job.interval
)
class Report(object):
"""
NetBox users can extend this object to write custom reports to be used for validating data within NetBox. Each
report must have one or more test methods named `test_*`.
The `_results` attribute of a completed report will take the following form:
{
'test_bar': {
'failures': 42,
'log': [
(<datetime>, <level>, <object>, <message>),
...
]
},
'test_foo': {
'failures': 0,
'log': [
(<datetime>, <level>, <object>, <message>),
...
]
}
}
"""
description = None
scheduling_enabled = True
job_timeout = None
def __init__(self):
self._results = {}
self.active_test = None
self.failed = False
self.logger = logging.getLogger(f"netbox.reports.{self.__module__}.{self.__class__.__name__}")
# Compile test methods and initialize results skeleton
test_methods = []
for method in dir(self):
if method.startswith('test_') and callable(getattr(self, method)):
test_methods.append(method)
self._results[method] = {
'success': 0,
'info': 0,
'warning': 0,
'failure': 0,
'log': [],
}
self.test_methods = test_methods
@classproperty
def module(self):
return self.__module__
@classproperty
def class_name(self):
return self.__name__
@classproperty
def full_name(self):
return f'{self.module}.{self.class_name}'
@property
def name(self):
"""
Override this attribute to set a custom display name.
"""
return self.class_name
@property
def filename(self):
return inspect.getfile(self.__class__)
@property
def source(self):
return inspect.getsource(self.__class__)
@property
def is_valid(self):
"""
Indicates whether the report can be run.
"""
return bool(self.test_methods)
class Report(BaseScript):
#
# Logging methods
# Legacy logging methods for Reports
#
def _log(self, obj, message, level=LogLevelChoices.LOG_DEFAULT):
"""
Log a message from a test method. Do not call this method directly; use one of the log_* wrappers below.
"""
if level not in LogLevelChoices.values():
raise Exception(f"Unknown logging level: {level}")
self._results[self.active_test]['log'].append((
timezone.now().isoformat(),
level,
str(obj) if obj else None,
obj.get_absolute_url() if hasattr(obj, 'get_absolute_url') else None,
message,
))
# There is no generic log() equivalent on BaseScript
def log(self, message):
"""
Log a message which is not associated with a particular object.
"""
self._log(None, message, level=LogLevelChoices.LOG_DEFAULT)
self.logger.info(message)
self._log(message, None, level=LogLevelChoices.LOG_DEFAULT)
def log_success(self, obj, message=None):
"""
Record a successful test against an object. Logging a message is optional.
"""
if message:
self._log(obj, message, level=LogLevelChoices.LOG_SUCCESS)
self._results[self.active_test]['success'] += 1
self.logger.info(f"Success | {obj}: {message}")
def log_success(self, obj=None, message=None):
super().log_success(message, obj)
def log_info(self, obj, message):
"""
Log an informational message.
"""
self._log(obj, message, level=LogLevelChoices.LOG_INFO)
self._results[self.active_test]['info'] += 1
self.logger.info(f"Info | {obj}: {message}")
def log_info(self, obj=None, message=None):
super().log_info(message, obj)
def log_warning(self, obj, message):
"""
Log a warning.
"""
self._log(obj, message, level=LogLevelChoices.LOG_WARNING)
self._results[self.active_test]['warning'] += 1
self.logger.info(f"Warning | {obj}: {message}")
def log_warning(self, obj=None, message=None):
super().log_warning(message, obj)
def log_failure(self, obj, message):
"""
Log a failure. Calling this method will automatically mark the report as failed.
"""
self._log(obj, message, level=LogLevelChoices.LOG_FAILURE)
self._results[self.active_test]['failure'] += 1
self.logger.info(f"Failure | {obj}: {message}")
self.failed = True
def log_failure(self, obj=None, message=None):
super().log_failure(message, obj)
#
# Run methods
#
def run(self, job):
"""
Run the report and save its results. Each test method will be executed in order.
"""
self.logger.info(f"Running report")
# Perform any post-run tasks
self.pre_run()
try:
for method_name in self.test_methods:
self.active_test = method_name
test_method = getattr(self, method_name)
test_method()
job.data = self._results
if self.failed:
self.logger.warning("Report failed")
job.terminate(status=JobStatusChoices.STATUS_FAILED)
else:
self.logger.info("Report completed successfully")
job.terminate()
except Exception as e:
stacktrace = traceback.format_exc()
self.log_failure(None, f"An exception occurred: {type(e).__name__}: {e} <pre>{stacktrace}</pre>")
logger.error(f"Exception raised during report execution: {e}")
job.terminate(status=JobStatusChoices.STATUS_ERRORED, error=repr(e))
# Perform any post-run tasks
self.post_run()
def pre_run(self):
"""
Extend this method to include any tasks which should execute *before* the report is run.
"""
pass
def post_run(self):
"""
Extend this method to include any tasks which should execute *after* the report is run.
"""
pass
# Added in v4.0 to avoid confusion with the log_debug() method provided by BaseScript
def log_debug(self, obj=None, message=None):
super().log_debug(message, obj)
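Because `Report` is now a thin subclass of `BaseScript`, an existing report keeps working with the legacy `log_*(obj, message)` argument order shown above, and its `test_*` methods are executed by the default `run()` implementation. A minimal sketch of a report written against the merged API (the model and filter values are illustrative, not taken from this changeset):

```python
from dcim.models import Device
from extras.reports import Report


class DeviceNameReport(Report):
    """Flag active devices that have no name assigned."""

    def test_device_names(self):
        for device in Device.objects.filter(status='active'):
            if device.name:
                self.log_success(device)                      # legacy (obj, message) order still accepted
            else:
                self.log_failure(device, "Device has no name")
```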

View File

@ -10,11 +10,12 @@ from django import forms
from django.conf import settings
from django.core.validators import RegexValidator
from django.db import transaction
from django.utils import timezone
from django.utils.functional import classproperty
from django.utils.translation import gettext as _
from core.choices import JobStatusChoices
from core.models import Job
from extras.api.serializers import ScriptOutputSerializer
from extras.choices import LogLevelChoices
from extras.models import ScriptModule
from extras.signals import clear_events
@ -25,6 +26,8 @@ from utilities.forms import add_blank_choice
from utilities.forms.fields import DynamicModelChoiceField, DynamicModelMultipleChoiceField
from .context_managers import event_tracking
from .forms import ScriptForm
from .utils import is_report
__all__ = (
'BaseScript',
@ -270,17 +273,28 @@ class BaseScript:
pass
def __init__(self):
self.messages = [] # Primary script log
self.tests = {} # Mapping of logs for test methods
self.output = ''
self.failed = False
self._current_test = None # Tracks the current test method being run (if any)
# Initiate the log
self.logger = logging.getLogger(f"netbox.scripts.{self.__module__}.{self.__class__.__name__}")
self.log = []
# Declare the placeholder for the current request
self.request = None
# Grab some info about the script
self.filename = inspect.getfile(self.__class__)
self.source = inspect.getsource(self.__class__)
# Compile test methods and initialize results skeleton
for method in dir(self):
if method.startswith('test_') and callable(getattr(self, method)):
self.tests[method] = {
LogLevelChoices.LOG_SUCCESS: 0,
LogLevelChoices.LOG_INFO: 0,
LogLevelChoices.LOG_WARNING: 0,
LogLevelChoices.LOG_FAILURE: 0,
'log': [],
}
def __str__(self):
return self.name
@ -331,6 +345,14 @@ class BaseScript:
def scheduling_enabled(self):
return getattr(self.Meta, 'scheduling_enabled', True)
@property
def filename(self):
return inspect.getfile(self.__class__)
@property
def source(self):
return inspect.getsource(self.__class__)
@classmethod
def _get_vars(cls):
vars = {}
@ -356,9 +378,28 @@ class BaseScript:
return ordered_vars
def run(self, data, commit):
raise NotImplementedError("The script must define a run() method.")
"""
Override this method with custom script logic.
"""
# Backward compatibility for legacy Reports
self.pre_run()
self.run_tests()
self.post_run()
def get_job_data(self):
"""
Return a dictionary of data to attach to the script's Job.
"""
return {
'log': self.messages,
'output': self.output,
'tests': self.tests,
}
#
# Form rendering
#
def get_fieldsets(self):
fieldsets = []
@ -397,29 +438,66 @@ class BaseScript:
return form
#
# Logging
#
def log_debug(self, message):
self.logger.log(logging.DEBUG, message)
self.log.append((LogLevelChoices.LOG_DEFAULT, str(message)))
def _log(self, message, obj=None, level=LogLevelChoices.LOG_DEFAULT):
"""
Log a message. Do not call this method directly; use one of the log_* wrappers below.
"""
if level not in LogLevelChoices.values():
raise ValueError(f"Invalid logging level: {level}")
def log_success(self, message):
self.logger.log(logging.INFO, message) # No syslog equivalent for SUCCESS
self.log.append((LogLevelChoices.LOG_SUCCESS, str(message)))
# A test method is currently active, so log the message using legacy Report logging
if self._current_test:
def log_info(self, message):
self.logger.log(logging.INFO, message)
self.log.append((LogLevelChoices.LOG_INFO, str(message)))
# TODO: Use a dataclass for test method logs
self.tests[self._current_test]['log'].append((
timezone.now().isoformat(),
level,
str(obj) if obj else None,
obj.get_absolute_url() if hasattr(obj, 'get_absolute_url') else None,
str(message),
))
def log_warning(self, message):
self.logger.log(logging.WARNING, message)
self.log.append((LogLevelChoices.LOG_WARNING, str(message)))
# Increment the event counter for this level
if level in self.tests[self._current_test]:
self.tests[self._current_test][level] += 1
def log_failure(self, message):
self.logger.log(logging.ERROR, message)
self.log.append((LogLevelChoices.LOG_FAILURE, str(message)))
elif message:
# Record to the script's log
self.messages.append({
'time': timezone.now().isoformat(),
'status': level,
'message': str(message),
})
# Record to the system log
if obj:
message = f"{obj}: {message}"
self.logger.log(LogLevelChoices.SYSTEM_LEVELS[level], message)
def log_debug(self, message, obj=None):
self._log(message, obj, level=LogLevelChoices.LOG_DEBUG)
def log_success(self, message, obj=None):
self._log(message, obj, level=LogLevelChoices.LOG_SUCCESS)
def log_info(self, message, obj=None):
self._log(message, obj, level=LogLevelChoices.LOG_INFO)
def log_warning(self, message, obj=None):
self._log(message, obj, level=LogLevelChoices.LOG_WARNING)
def log_failure(self, message, obj=None):
self._log(message, obj, level=LogLevelChoices.LOG_FAILURE)
self.failed = True
#
# Convenience functions
#
def load_yaml(self, filename):
"""
@ -446,6 +524,39 @@ class BaseScript:
return data
#
# Legacy Report functionality
#
def run_tests(self):
"""
Run the report and save its results. Each test method will be executed in order.
"""
self.logger.info(f"Running report")
try:
for test_name in self.tests:
self._current_test = test_name
test_method = getattr(self, test_name)
test_method()
self._current_test = None
except Exception as e:
self._current_test = None
self.post_run()
raise e
def pre_run(self):
"""
Legacy method for operations performed immediately prior to running a Report.
"""
pass
def post_run(self):
"""
Legacy method for operations performed immediately after running a Report.
"""
pass
class Script(BaseScript):
"""
@ -500,7 +611,16 @@ def run_script(data, job, request=None, commit=True, **kwargs):
# Add the current request as a property of the script
script.request = request
def _run_script():
def set_job_data(script):
job.data = {
'log': script.messages,
'output': script.output,
'tests': script.tests,
}
return job
def _run_script(job):
"""
Core script execution task. We capture this within a subfunction to allow for conditionally wrapping it with
the event_tracking context manager (which is bypassed if commit == False).
@ -508,25 +628,39 @@ def run_script(data, job, request=None, commit=True, **kwargs):
try:
try:
with transaction.atomic():
script.output = script.run(data=data, commit=commit)
script.output = script.run(data, commit)
if not commit:
raise AbortTransaction()
except AbortTransaction:
script.log_info("Database changes have been reverted automatically.")
script.log_info(message=_("Database changes have been reverted automatically."))
if request:
clear_events.send(request)
job.data = ScriptOutputSerializer(script).data
job.data = script.get_job_data()
if script.failed:
logger.warning(f"Script failed")
job.terminate(status=JobStatusChoices.STATUS_FAILED)
else:
job.terminate()
except Exception as e:
if type(e) is AbortScript:
script.log_failure(f"Script aborted with error: {e}")
msg = _("Script aborted with error: ") + str(e)
if is_report(type(script)):
script.log_failure(message=msg)
else:
script.log_failure(msg)
logger.error(f"Script aborted with error: {e}")
else:
stacktrace = traceback.format_exc()
script.log_failure(f"An exception occurred: `{type(e).__name__}: {e}`\n```\n{stacktrace}\n```")
script.log_failure(
message=_("An exception occurred: ") + f"`{type(e).__name__}: {e}`\n```\n{stacktrace}\n```"
)
logger.error(f"Exception raised during script execution: {e}")
script.log_info("Database changes have been reverted due to error.")
job.data = ScriptOutputSerializer(script).data
script.log_info(message=_("Database changes have been reverted due to error."))
job.data = script.get_job_data()
job.terminate(status=JobStatusChoices.STATUS_ERRORED, error=repr(e))
if request:
clear_events.send(request)
@ -537,9 +671,9 @@ def run_script(data, job, request=None, commit=True, **kwargs):
# change logging, event rules, etc.
if commit:
with event_tracking(request):
_run_script()
_run_script(job)
else:
_run_script()
_run_script(job)
# Schedule the next job if an interval has been set
if job.interval:

View File

@ -68,18 +68,20 @@ def handle_changed_object(sender, instance, **kwargs):
else:
return
# Record an ObjectChange if applicable
if m2m_changed:
ObjectChange.objects.filter(
# Create/update an ObjectChange record for this change
objectchange = instance.to_objectchange(action)
# If this is a many-to-many field change, check for a previous ObjectChange instance recorded
# for this object by this request and update it
if m2m_changed and (
prev_change := ObjectChange.objects.filter(
changed_object_type=ContentType.objects.get_for_model(instance),
changed_object_id=instance.pk,
request_id=request.id
).update(
postchange_data=instance.to_objectchange(action).postchange_data
)
else:
objectchange = instance.to_objectchange(action)
if objectchange and objectchange.has_changes:
).first()
):
prev_change.postchange_data = objectchange.postchange_data
prev_change.save()
elif objectchange and objectchange.has_changes:
objectchange.user = request.user
objectchange.request_id = request.id
objectchange.save()
@ -251,7 +253,8 @@ def process_job_start_event_rules(sender, **kwargs):
Process event rules for jobs starting.
"""
event_rules = EventRule.objects.filter(type_job_start=True, enabled=True, content_types=sender.object_type)
process_event_rules(event_rules, sender.object_type.model, EVENT_JOB_START, sender.data, sender.user.username)
username = sender.user.username if sender.user else None
process_event_rules(event_rules, sender.object_type.model, EVENT_JOB_START, sender.data, username)
@receiver(job_end)
@ -260,4 +263,5 @@ def process_job_end_event_rules(sender, **kwargs):
Process event rules for jobs terminating.
"""
event_rules = EventRule.objects.filter(type_job_end=True, enabled=True, content_types=sender.object_type)
process_event_rules(event_rules, sender.object_type.model, EVENT_JOB_END, sender.data, sender.user.username)
username = sender.user.username if sender.user else None
process_event_rules(event_rules, sender.object_type.model, EVENT_JOB_END, sender.data, username)

View File

@ -14,7 +14,6 @@ from extras.reports import Report
from extras.scripts import BooleanVar, IntegerVar, Script, StringVar
from utilities.testing import APITestCase, APIViewTestCases
User = get_user_model()
@ -251,6 +250,23 @@ class CustomFieldChoiceSetTest(APIViewTestCases.APIViewTestCase):
)
CustomFieldChoiceSet.objects.bulk_create(choice_sets)
def test_invalid_choice_items(self):
"""
Attempting to define each choice as a single-item list should return a 400 error.
"""
self.add_permissions('extras.add_customfieldchoiceset')
data = {
"name": "test",
"extra_choices": [
["choice1"],
["choice2"],
["choice3"],
]
}
response = self.client.post(self._get_list_url(), data, format='json', **self.header)
self.assertEqual(response.status_code, 400)
class CustomLinkTest(APIViewTestCases.APIViewTestCase):
model = CustomLink
@ -730,37 +746,6 @@ class ConfigTemplateTest(APIViewTestCases.APIViewTestCase):
ConfigTemplate.objects.bulk_create(config_templates)
class ReportTest(APITestCase):
class TestReport(Report):
def test_foo(self):
self.log_success(None, "Report completed")
@classmethod
def setUpTestData(cls):
ReportModule.objects.create(
file_root=ManagedFileRootPathChoices.REPORTS,
file_path='/var/tmp/report.py'
)
def get_test_report(self, *args):
return ReportModule.objects.first(), self.TestReport()
def setUp(self):
super().setUp()
# Monkey-patch the API viewset's _get_report() method to return our test Report above
from extras.api.views import ReportViewSet
ReportViewSet._get_report = self.get_test_report
def test_get_report(self):
url = reverse('extras-api:report-detail', kwargs={'pk': None})
response = self.client.get(url, **self.header)
self.assertEqual(response.data['name'], self.TestReport.__name__)
class ScriptTest(APITestCase):
class TestScript(Script):

View File

@ -270,7 +270,12 @@ class ConfigContextTest(TestCase):
tag = Tag.objects.first()
cluster_type = ClusterType.objects.create(name="Cluster Type")
cluster_group = ClusterGroup.objects.create(name="Cluster Group")
cluster = Cluster.objects.create(name="Cluster", group=cluster_group, type=cluster_type)
cluster = Cluster.objects.create(
name="Cluster",
group=cluster_group,
type=cluster_type,
site=site,
)
region_context = ConfigContext.objects.create(
name="region",
@ -354,6 +359,41 @@ class ConfigContextTest(TestCase):
annotated_queryset = VirtualMachine.objects.filter(name=virtual_machine.name).annotate_config_context_data()
self.assertEqual(virtual_machine.get_config_context(), annotated_queryset[0].get_config_context())
def test_virtualmachine_site_context(self):
"""
Check that config context associated with a site applies to a VM whether the VM is assigned
directly to that site or via its cluster.
"""
site = Site.objects.first()
cluster_type = ClusterType.objects.create(name="Cluster Type")
cluster = Cluster.objects.create(name="Cluster", type=cluster_type, site=site)
vm_role = DeviceRole.objects.first()
# Create a ConfigContext associated with the site
context = ConfigContext.objects.create(
name="context1",
weight=100,
data={"foo": True}
)
context.sites.add(site)
# Create one VM assigned directly to the site, and one assigned via the cluster
vm1 = VirtualMachine.objects.create(name="VM 1", site=site, role=vm_role)
vm2 = VirtualMachine.objects.create(name="VM 2", cluster=cluster, role=vm_role)
# Check that their individually-rendered config contexts are identical
self.assertEqual(
vm1.get_config_context(),
vm2.get_config_context()
)
# Check that their annotated config contexts are identical
vms = VirtualMachine.objects.filter(pk__in=(vm1.pk, vm2.pk)).annotate_config_context_data()
self.assertEqual(
vms[0].get_config_context(),
vms[1].get_config_context()
)
def test_multiple_tags_return_distinct_objects(self):
"""
Tagged items use a generic relationship, which results in duplicate rows being returned when queried.

View File

@ -116,15 +116,6 @@ urlpatterns = [
path('dashboard/widgets/<uuid:id>/configure/', views.DashboardWidgetConfigView.as_view(), name='dashboardwidget_config'),
path('dashboard/widgets/<uuid:id>/delete/', views.DashboardWidgetDeleteView.as_view(), name='dashboardwidget_delete'),
# Reports
path('reports/', views.ReportListView.as_view(), name='report_list'),
path('reports/add/', views.ReportModuleCreateView.as_view(), name='reportmodule_add'),
path('reports/results/<int:job_pk>/', views.ReportResultView.as_view(), name='report_result'),
path('reports/<int:pk>/', include(get_model_urls('extras', 'reportmodule'))),
path('reports/<str:module>/<str:name>/', views.ReportView.as_view(), name='report'),
path('reports/<str:module>/<str:name>/source/', views.ReportSourceView.as_view(), name='report_source'),
path('reports/<str:module>/<str:name>/jobs/', views.ReportJobsView.as_view(), name='report_jobs'),
# Scripts
path('scripts/', views.ScriptListView.as_view(), name='script_list'),
path('scripts/add/', views.ScriptModuleCreateView.as_view(), name='scriptmodule_add'),

View File

@ -49,11 +49,12 @@ def register_features(model, features):
def is_script(obj):
"""
Returns True if the object is a Script.
Returns True if the object is a Script or Report.
"""
from .reports import Report
from .scripts import Script
try:
return issubclass(obj, Script) and obj != Script
return (issubclass(obj, Report) and obj != Report) or (issubclass(obj, Script) and obj != Script)
except TypeError:
return False
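
A minimal sketch (import paths assumed, not taken from this diff) of what the broadened check now accepts: legacy Report subclasses are discovered by the same loader as Script subclasses, while the base classes and non-class objects are still rejected.

# Sketch only; assumes is_script lives in extras.utils as the hunk above suggests.
from extras.reports import Report
from extras.scripts import Script
from extras.utils import is_script

class DeviceAudit(Script):            # hypothetical script class
    pass

class LegacyDeviceReport(Report):     # hypothetical legacy report class
    pass

assert is_script(DeviceAudit)
assert is_script(LegacyDeviceReport)  # previously returned False
assert not is_script(Script)          # the base classes themselves are excluded
assert not is_script(Report)
assert not is_script('not a class')   # TypeError is caught and False is returned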

View File

@ -9,7 +9,7 @@ from django.urls import reverse
from django.utils.translation import gettext as _
from django.views.generic import View
from core.choices import JobStatusChoices, ManagedFileRootPathChoices
from core.choices import ManagedFileRootPathChoices
from core.forms import ManagedFileForm
from core.models import Job
from core.tables import JobTable
@ -24,9 +24,7 @@ from utilities.templatetags.builtins.filters import render_markdown
from utilities.utils import copy_safe_request, count_related, get_viewname, normalize_querydict, shallow_compare_dict
from utilities.views import ContentTypePermissionRequiredMixin, register_model_view
from . import filtersets, forms, tables
from .forms.reports import ReportForm
from .models import *
from .reports import run_report
from .scripts import run_script
@ -1006,185 +1004,6 @@ class DashboardWidgetDeleteView(LoginRequiredMixin, View):
return redirect(reverse('home'))
#
# Reports
#
@register_model_view(ReportModule, 'edit')
class ReportModuleCreateView(generic.ObjectEditView):
queryset = ReportModule.objects.all()
form = ManagedFileForm
def alter_object(self, obj, *args, **kwargs):
obj.file_root = ManagedFileRootPathChoices.REPORTS
return obj
@register_model_view(ReportModule, 'delete')
class ReportModuleDeleteView(generic.ObjectDeleteView):
queryset = ReportModule.objects.all()
default_return_url = 'extras:report_list'
class ReportListView(ContentTypePermissionRequiredMixin, View):
"""
Retrieve all the available reports from disk and the recorded Job (if any) for each.
"""
def get_required_permission(self):
return 'extras.view_report'
def get(self, request):
report_modules = ReportModule.objects.restrict(request.user)
return render(request, 'extras/report_list.html', {
'model': ReportModule,
'report_modules': report_modules,
})
def get_report_module(module, request):
return get_object_or_404(ReportModule.objects.restrict(request.user), file_path__regex=f"^{module}\\.")
class ReportView(ContentTypePermissionRequiredMixin, View):
"""
Display a single Report and its associated Job (if any).
"""
def get_required_permission(self):
return 'extras.view_report'
def get(self, request, module, name):
module = get_report_module(module, request)
report = module.reports[name]()
object_type = ContentType.objects.get(app_label='extras', model='reportmodule')
report.result = Job.objects.filter(
object_type=object_type,
object_id=module.pk,
name=report.name,
status__in=JobStatusChoices.TERMINAL_STATE_CHOICES
).first()
return render(request, 'extras/report.html', {
'module': module,
'report': report,
'form': ReportForm(scheduling_enabled=report.scheduling_enabled),
})
def post(self, request, module, name):
if not request.user.has_perm('extras.run_report'):
return HttpResponseForbidden()
module = get_report_module(module, request)
report = module.reports[name]()
form = ReportForm(request.POST, scheduling_enabled=report.scheduling_enabled)
if form.is_valid():
# Allow execution only if RQ worker process is running
if not get_workers_for_queue('default'):
messages.error(request, "Unable to run report: RQ worker process not running.")
return render(request, 'extras/report.html', {
'report': report,
})
# Run the Report. A new Job is created.
job = Job.enqueue(
run_report,
instance=module,
name=report.class_name,
user=request.user,
schedule_at=form.cleaned_data.get('schedule_at'),
interval=form.cleaned_data.get('interval'),
job_timeout=report.job_timeout
)
return redirect('extras:report_result', job_pk=job.pk)
return render(request, 'extras/report.html', {
'module': module,
'report': report,
'form': form,
})
class ReportSourceView(ContentTypePermissionRequiredMixin, View):
def get_required_permission(self):
return 'extras.view_report'
def get(self, request, module, name):
module = get_report_module(module, request)
report = module.reports[name]()
return render(request, 'extras/report/source.html', {
'module': module,
'report': report,
'tab': 'source',
})
class ReportJobsView(ContentTypePermissionRequiredMixin, View):
def get_required_permission(self):
return 'extras.view_report'
def get(self, request, module, name):
module = get_report_module(module, request)
report = module.reports[name]()
object_type = ContentType.objects.get(app_label='extras', model='reportmodule')
jobs = Job.objects.filter(
object_type=object_type,
object_id=module.pk,
name=report.class_name
)
jobs_table = JobTable(
data=jobs,
orderable=False,
user=request.user
)
jobs_table.configure(request)
return render(request, 'extras/report/jobs.html', {
'module': module,
'report': report,
'table': jobs_table,
'tab': 'jobs',
})
class ReportResultView(ContentTypePermissionRequiredMixin, View):
"""
Display a Job pertaining to the execution of a Report.
"""
def get_required_permission(self):
return 'extras.view_report'
def get(self, request, job_pk):
object_type = ContentType.objects.get_by_natural_key(app_label='extras', model='reportmodule')
job = get_object_or_404(Job.objects.all(), pk=job_pk, object_type=object_type)
module = job.object
report = module.reports[job.name]
# If this is an HTMX request, return only the result HTML
if request.htmx:
response = render(request, 'extras/htmx/report_result.html', {
'report': report,
'job': job,
})
if job.completed or not job.started:
response.status_code = 286
return response
return render(request, 'extras/report_result.html', {
'report': report,
'job': job,
})
#
# Scripts
#
@ -1231,19 +1050,11 @@ class ScriptView(ContentTypePermissionRequiredMixin, View):
def get(self, request, module, name):
module = get_script_module(module, request)
script = module.scripts[name]()
jobs = module.get_jobs(script.class_name)
form = script.as_form(initial=normalize_querydict(request.GET))
# Look for a pending Job (use the latest one by creation timestamp)
object_type = ContentType.objects.get(app_label='extras', model='scriptmodule')
script.result = Job.objects.filter(
object_type=object_type,
object_id=module.pk,
name=script.name,
).exclude(
status__in=JobStatusChoices.TERMINAL_STATE_CHOICES
).first()
return render(request, 'extras/script.html', {
'job_count': jobs.count(),
'module': module,
'script': script,
'form': form,
@ -1255,6 +1066,7 @@ class ScriptView(ContentTypePermissionRequiredMixin, View):
module = get_script_module(module, request)
script = module.scripts[name]()
jobs = module.get_jobs(script.class_name)
form = script.as_form(request.POST, request.FILES)
# Allow execution only if RQ worker process is running
@ -1278,6 +1090,7 @@ class ScriptView(ContentTypePermissionRequiredMixin, View):
return redirect('extras:script_result', job_pk=job.pk)
return render(request, 'extras/script.html', {
'job_count': jobs.count(),
'module': module,
'script': script,
'form': form,
@ -1292,8 +1105,10 @@ class ScriptSourceView(ContentTypePermissionRequiredMixin, View):
def get(self, request, module, name):
module = get_script_module(module, request)
script = module.scripts[name]()
jobs = module.get_jobs(script.class_name)
return render(request, 'extras/script/source.html', {
'job_count': jobs.count(),
'module': module,
'script': script,
'tab': 'source',
@ -1308,13 +1123,7 @@ class ScriptJobsView(ContentTypePermissionRequiredMixin, View):
def get(self, request, module, name):
module = get_script_module(module, request)
script = module.scripts[name]()
object_type = ContentType.objects.get(app_label='extras', model='scriptmodule')
jobs = Job.objects.filter(
object_type=object_type,
object_id=module.pk,
name=script.class_name
)
jobs = module.get_jobs(script.class_name)
jobs_table = JobTable(
data=jobs,
@ -1324,6 +1133,7 @@ class ScriptJobsView(ContentTypePermissionRequiredMixin, View):
jobs_table.configure(request)
return render(request, 'extras/script/jobs.html', {
'job_count': jobs.count(),
'module': module,
'script': script,
'table': jobs_table,
@ -1343,20 +1153,28 @@ class ScriptResultView(ContentTypePermissionRequiredMixin, View):
module = job.object
script = module.scripts[job.name]()
# If this is an HTMX request, return only the result HTML
if request.htmx:
response = render(request, 'extras/htmx/script_result.html', {
context = {
'script': script,
'job': job,
})
}
if job.data and 'log' in job.data:
# Script
context['tests'] = job.data.get('tests', {})
elif job.data:
# Legacy Report
context['tests'] = {
name: data for name, data in job.data.items()
if name.startswith('test_')
}
# If this is an HTMX request, return only the result HTML
if request.htmx:
response = render(request, 'extras/htmx/script_result.html', context)
if job.completed or not job.started:
response.status_code = 286
return response
return render(request, 'extras/script_result.html', {
'script': script,
'job': job,
})
return render(request, 'extras/script_result.html', context)
#
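
A hedged illustration of the regrouping performed above, using hypothetical legacy job data (key names are examples only): legacy Report results stored one entry per test method, so only the 'test_'-prefixed keys are carried over into the 'tests' context.

# Hypothetical legacy Report result payload; only the structure matters here.
legacy_data = {
    'test_primary_ips': {'success': 10, 'info': 0, 'warning': 2, 'failure': 0, 'log': []},
    'test_console_ports': {'success': 4, 'info': 0, 'warning': 0, 'failure': 1, 'log': []},
    'failed': True,   # bookkeeping keys are ignored by the filter
}
tests = {name: data for name, data in legacy_data.items() if name.startswith('test_')}
assert set(tests) == {'test_primary_ips', 'test_console_ports'}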

View File

@ -254,7 +254,7 @@ class PrefixBulkEditForm(NetBoxModelBulkEditForm):
mark_utilized = forms.NullBooleanField(
required=False,
widget=BulkEditNullBooleanSelect(),
label=_('Treat as 100% utilized')
label=_('Treat as fully utilized')
)
description = forms.CharField(
label=_('Description'),
@ -298,7 +298,7 @@ class IPRangeBulkEditForm(NetBoxModelBulkEditForm):
mark_utilized = forms.NullBooleanField(
required=False,
widget=BulkEditNullBooleanSelect(),
label=_('Treat as 100% utilized')
label=_('Treat as fully utilized')
)
description = forms.CharField(
label=_('Description'),

View File

@ -240,7 +240,7 @@ class PrefixFilterForm(TenancyFilterForm, NetBoxModelFilterSetForm):
)
mark_utilized = forms.NullBooleanField(
required=False,
label=_('Marked as 100% utilized'),
label=_('Treat as fully utilized'),
widget=forms.Select(
choices=BOOLEAN_WITH_BLANK_CHOICES
)
@ -279,7 +279,7 @@ class IPRangeFilterForm(TenancyFilterForm, NetBoxModelFilterSetForm):
)
mark_utilized = forms.NullBooleanField(
required=False,
label=_('Marked as 100% utilized'),
label=_('Treat as fully utilized'),
widget=forms.Select(
choices=BOOLEAN_WITH_BLANK_CHOICES
)

View File

@ -214,7 +214,7 @@ class PrefixForm(TenancyForm, NetBoxModelForm):
required=False,
selector=True,
query_params={
'site_id': '$site',
'available_at_site': '$site',
},
label=_('VLAN'),
)
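
Switching the VLAN selector's query parameter from site_id to available_at_site means the dropdown now lists VLANs usable at the selected site (for example via a scoped VLAN group) rather than only VLANs assigned directly to it. A sketch of the equivalent API query follows, with hypothetical host, token, and site ID.

# Illustration only; the filter name comes from the form change above,
# all other values are hypothetical.
import requests

NETBOX_URL = 'https://netbox.example.com'
TOKEN = '0123456789abcdef'

response = requests.get(
    f'{NETBOX_URL}/api/ipam/vlans/',
    params={'available_at_site': 17},   # hypothetical site ID
    headers={'Authorization': f'Token {TOKEN}'},
)
response.raise_for_status()
vlans = response.json()['results']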

View File

@ -268,7 +268,7 @@ class Prefix(GetAvailablePrefixesMixin, PrimaryModel):
mark_utilized = models.BooleanField(
verbose_name=_('mark utilized'),
default=False,
help_text=_("Treat as 100% utilized")
help_text=_("Treat as fully utilized")
)
# Cached depth & child counts
@ -427,10 +427,10 @@ class Prefix(GetAvailablePrefixesMixin, PrimaryModel):
prefix = netaddr.IPSet(self.prefix)
child_ips = netaddr.IPSet([ip.address.ip for ip in self.get_child_ips()])
child_ranges = netaddr.IPSet()
child_ranges = []
for iprange in self.get_child_ranges():
child_ranges.add(iprange.range)
available_ips = prefix - child_ips - child_ranges
child_ranges.append(iprange.range)
available_ips = prefix - child_ips - netaddr.IPSet(child_ranges)
# IPv6 /127's, pool, or IPv4 /31-/32 sets are fully usable
if (self.family == 6 and self.prefix.prefixlen >= 127) or self.is_pool or (self.family == 4 and self.prefix.prefixlen >= 31):
@ -535,7 +535,7 @@ class IPRange(PrimaryModel):
mark_utilized = models.BooleanField(
verbose_name=_('mark utilized'),
default=False,
help_text=_("Treat as 100% utilized")
help_text=_("Treat as fully utilized")
)
clone_fields = (
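
A minimal standalone sketch (not NetBox code) of the pattern introduced above: collecting the child ranges in a plain list and building a single IPSet from it avoids the per-range set recomputation that repeated IPSet.add() calls incur.

import netaddr

prefix = netaddr.IPSet(netaddr.IPNetwork('10.0.0.0/24'))        # 256 addresses
child_ips = netaddr.IPSet([netaddr.IPAddress('10.0.0.5')])      # 1 assigned IP
child_ranges = [netaddr.IPRange('10.0.0.10', '10.0.0.20')]      # 11 addresses

available_ips = prefix - child_ips - netaddr.IPSet(child_ranges)
print(available_ips.size)   # 244 (256 - 1 - 11)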

View File

@ -604,7 +604,7 @@ class PrefixIPAddressesView(generic.ObjectChildrenView):
return parent.get_child_ips().restrict(request.user, 'view').prefetch_related('vrf', 'tenant', 'tenant__group')
def prep_table_data(self, request, queryset, parent):
if not get_table_ordering(request, self.table):
if not request.GET.get('q') and not get_table_ordering(request, self.table):
return add_available_ipaddresses(parent.prefix, queryset, parent.is_pool)
return queryset
@ -1068,6 +1068,12 @@ class FHRPGroupAssignmentEditView(generic.ObjectEditView):
instance.interface = get_object_or_404(content_type.model_class(), pk=request.GET.get('interface_id'))
return instance
def get_extra_addanother_params(self, request):
return {
'interface_type': request.GET.get('interface_type'),
'interface_id': request.GET.get('interface_id'),
}
@register_model_view(FHRPGroupAssignment, 'delete')
class FHRPGroupAssignmentDeleteView(generic.ObjectDeleteView):

View File

@ -39,6 +39,8 @@ REDIS = {
SECRET_KEY = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'
DJANGO_ADMIN_ENABLED = True
DEFAULT_PERMISSIONS = {}
LOGGING = {

View File

@ -36,3 +36,7 @@ DEFAULT_ACTION_PERMISSIONS = {
'bulk_edit': {'change'},
'bulk_delete': {'delete'},
}
# General-purpose tokens
CENSOR_TOKEN = '********'
CENSOR_TOKEN_CHANGED = '***CHANGED***'
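
A hypothetical usage sketch, not taken from this diff, of how general-purpose censor tokens like these are typically applied when a stored secret is echoed back to a client:

CENSOR_TOKEN = '********'
CENSOR_TOKEN_CHANGED = '***CHANGED***'

def censor(value, changed=False):
    """Return a masked placeholder in place of a secret value."""
    if not value:
        return value
    return CENSOR_TOKEN_CHANGED if changed else CENSOR_TOKEN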

View File

@ -317,14 +317,8 @@ CUSTOMIZATION_MENU = Menu(
),
),
MenuGroup(
label=_('Reports & Scripts'),
label=_('Scripts'),
items=(
MenuItem(
link='extras:report_list',
link_text=_('Reports'),
permissions=['extras.view_report'],
buttons=get_model_buttons('extras', "reportmodule", actions=['add'])
),
MenuItem(
link='extras:script_list',
link_text=_('Scripts'),
@ -377,19 +371,19 @@ ADMIN_MENU = Menu(
items=(
# Proxy model for auth.User
MenuItem(
link=f'users:netboxuser_list',
link=f'users:user_list',
link_text=_('Users'),
permissions=[f'auth.view_user'],
staff_only=True,
buttons=(
MenuItemButton(
link=f'users:netboxuser_add',
link=f'users:user_add',
title='Add',
icon_class='mdi mdi-plus-thick',
permissions=[f'auth.add_user']
),
MenuItemButton(
link=f'users:netboxuser_import',
link=f'users:user_import',
title='Import',
icon_class='mdi mdi-upload',
permissions=[f'auth.add_user']

View File

@ -29,7 +29,7 @@ from netbox.plugins import PluginConfig
# Environment setup
#
VERSION = '3.7-beta1'
VERSION = '3.7.3-dev'
# Hostname
HOSTNAME = platform.node()
@ -115,6 +115,7 @@ DEFAULT_PERMISSIONS = getattr(configuration, 'DEFAULT_PERMISSIONS', {
'users.delete_token': ({'user': '$user'},),
})
DEVELOPER = getattr(configuration, 'DEVELOPER', False)
DJANGO_ADMIN_ENABLED = getattr(configuration, 'DJANGO_ADMIN_ENABLED', False)
DOCS_ROOT = getattr(configuration, 'DOCS_ROOT', os.path.join(os.path.dirname(BASE_DIR), 'docs'))
EMAIL = getattr(configuration, 'EMAIL', {})
EVENTS_PIPELINE = getattr(configuration, 'EVENTS_PIPELINE', (
@ -123,7 +124,6 @@ EVENTS_PIPELINE = getattr(configuration, 'EVENTS_PIPELINE', (
EXEMPT_VIEW_PERMISSIONS = getattr(configuration, 'EXEMPT_VIEW_PERMISSIONS', [])
FIELD_CHOICES = getattr(configuration, 'FIELD_CHOICES', {})
FILE_UPLOAD_MAX_MEMORY_SIZE = getattr(configuration, 'FILE_UPLOAD_MAX_MEMORY_SIZE', 2621440)
GIT_PATH = getattr(configuration, 'GIT_PATH', 'git')
HTTP_PROXIES = getattr(configuration, 'HTTP_PROXIES', None)
INTERNAL_IPS = getattr(configuration, 'INTERNAL_IPS', ('127.0.0.1', '::1'))
JINJA2_FILTERS = getattr(configuration, 'JINJA2_FILTERS', {})
@ -355,7 +355,6 @@ SERVER_EMAIL = EMAIL.get('FROM_EMAIL')
#
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
@ -393,6 +392,9 @@ INSTALLED_APPS = [
'drf_spectacular_sidecar',
]
if DJANGO_ADMIN_ENABLED:
INSTALLED_APPS.insert(0, 'django.contrib.admin')
# Middleware
MIDDLEWARE = [
'graphiql_debug_toolbar.middleware.DebugToolbarMiddleware',
@ -452,6 +454,8 @@ AUTHENTICATION_BACKENDS = [
'netbox.authentication.ObjectPermissionBackend',
]
AUTH_USER_MODEL = 'users.User'
# Time zones
USE_TZ = True
@ -592,6 +596,8 @@ for param in dir(configuration):
SOCIAL_AUTH_JSONFIELD_ENABLED = True
SOCIAL_AUTH_CLEAN_USERNAME_FUNCTION = 'users.utils.clean_username'
SOCIAL_AUTH_USER_MODEL = AUTH_USER_MODEL
#
# Django Prometheus
#
@ -729,8 +735,10 @@ LANGUAGES = (
('en', _('English')),
('es', _('Spanish')),
('fr', _('French')),
('ja', _('Japanese')),
('pt', _('Portuguese')),
('ru', _('Russian')),
('tr', _('Turkish')),
)
LOCALE_PATHS = (
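
Two of the settings changes above alter deployment behavior: django.contrib.admin is now installed only when DJANGO_ADMIN_ENABLED is set, and Japanese and Turkish join the available translations. A short, illustrative configuration.py fragment (the host value is hypothetical; DJANGO_ADMIN_ENABLED now defaults to False):

# configuration.py (illustrative values)
ALLOWED_HOSTS = ['netbox.example.com']   # hypothetical hostname
DJANGO_ADMIN_ENABLED = True              # opt back in to the Django admin UI at /admin/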

View File

@ -12,7 +12,7 @@ class Migration(migrations.Migration):
migrations.CreateModel(
name='DummyModel',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False)),
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False)),
('name', models.CharField(max_length=20)),
('number', models.IntegerField(default=100)),
],

View File

@ -4,23 +4,23 @@ from netbox.plugins.navigation import PluginMenu, PluginMenuButton, PluginMenuIt
items = (
PluginMenuItem(
link='plugins:dummy_plugin:dummy_models',
link='plugins:dummy_plugin:dummy_model_list',
link_text='Item 1',
buttons=(
PluginMenuButton(
link='admin:dummy_plugin_dummymodel_add',
title='Add a new dummy model',
link='plugins:dummy_plugin:dummy_model_add',
title='Button 1',
icon_class='mdi mdi-plus-thick',
),
PluginMenuButton(
link='admin:dummy_plugin_dummymodel_add',
title='Add a new dummy model',
link='plugins:dummy_plugin:dummy_model_add',
title='Button 2',
icon_class='mdi mdi-plus-thick',
),
)
),
PluginMenuItem(
link='plugins:dummy_plugin:dummy_models',
link='plugins:dummy_plugin:dummy_model_list',
link_text='Item 2',
),
)

View File

@ -4,5 +4,6 @@ from . import views
urlpatterns = (
path('models/', views.DummyModelsView.as_view(), name='dummy_models'),
path('models/', views.DummyModelsView.as_view(), name='dummy_model_list'),
path('models/add/', views.DummyModelAddView.as_view(), name='dummy_model_add'),
)

View File

@ -1,3 +1,6 @@
import random
import string
from django.http import HttpResponse
from django.views.generic import View
@ -15,6 +18,20 @@ class DummyModelsView(View):
return HttpResponse(f"Instances: {instance_count}")
class DummyModelAddView(View):
def get(self, request):
return HttpResponse(f"Create an instance")
def post(self, request):
instance = DummyModel(
name=''.join(random.choices(string.ascii_lowercase, k=8)),
number=random.randint(1, 100000)
)
instance.save()
return HttpResponse(f"Instance created")
@register_model_view(Site, 'extra', path='other-stuff')
class ExtraCoreModelView(View):

View File

@ -41,7 +41,7 @@ class PluginTest(TestCase):
def test_views(self):
# Test URL resolution
url = reverse('plugins:dummy_plugin:dummy_models')
url = reverse('plugins:dummy_plugin:dummy_model_list')
self.assertEqual(url, '/plugins/dummy-plugin/models/')
# Test GET request

View File

@ -11,7 +11,6 @@ from netbox.graphql.schema import schema
from netbox.graphql.views import GraphQLView
from netbox.plugins.urls import plugin_patterns, plugin_api_patterns
from netbox.views import HomeView, StaticMediaFailureView, SearchView, htmx
from .admin import admin_site
_patterns = [
@ -70,26 +69,25 @@ _patterns = [
# Plugins
path('plugins/', include((plugin_patterns, 'plugins'))),
path('api/plugins/', include((plugin_api_patterns, 'plugins-api'))),
# Admin
path('admin/', admin_site.urls),
]
# Django admin UI
if settings.DJANGO_ADMIN_ENABLED:
from .admin import admin_site
_patterns.append(path('admin/', admin_site.urls))
# django-debug-toolbar
if settings.DEBUG:
import debug_toolbar
_patterns += [
path('__debug__/', include(debug_toolbar.urls)),
]
_patterns.append(path('__debug__/', include(debug_toolbar.urls)))
# Prometheus metrics
if settings.METRICS_ENABLED:
_patterns += [
path('', include('django_prometheus.urls')),
]
_patterns.append(path('', include('django_prometheus.urls')))
# Prepend BASE_PATH
urlpatterns = [
path('{}'.format(settings.BASE_PATH), include(_patterns))
path(settings.BASE_PATH, include(_patterns))
]
handler404 = 'netbox.views.errors.handler_404'

View File

@ -2,14 +2,17 @@ import re
from collections import namedtuple
from django.conf import settings
from django.contrib import messages
from django.contrib.contenttypes.models import ContentType
from django.core.cache import cache
from django.shortcuts import redirect, render
from django.utils.translation import gettext_lazy as _
from django.views.generic import View
from django_tables2 import RequestConfig
from packaging import version
from extras.dashboard.utils import get_dashboard
from extras.constants import DEFAULT_DASHBOARD
from extras.dashboard.utils import get_dashboard, get_default_dashboard
from netbox.forms import SearchForm
from netbox.search import LookupTypes
from netbox.search.backends import search_backend
@ -32,7 +35,13 @@ class HomeView(View):
return redirect('login')
# Construct the user's custom dashboard layout
try:
dashboard = get_dashboard(request.user).get_layout()
except Exception:
messages.error(request, _(
"There was an error loading the dashboard configuration. A default dashboard is in use."
))
dashboard = get_default_dashboard(config=DEFAULT_DASHBOARD).get_layout()
# Check whether a new release is available. (Only for staff/superusers.)
new_release = None

Binary files not shown (3 files).

View File

@ -25,6 +25,7 @@
"query-string": "^7.1.1",
"sass": "^1.55.0",
"slim-select": "^1.27.1",
"tom-select": "^2.3.1",
"typeface-inter": "^3.18.1",
"typeface-roboto-mono": "^1.1.13"
},
@ -225,6 +226,19 @@
"node": ">= 8"
}
},
"node_modules/@orchidjs/sifter": {
"version": "1.0.3",
"resolved": "https://registry.npmjs.org/@orchidjs/sifter/-/sifter-1.0.3.tgz",
"integrity": "sha512-zCZbwKegHytfsPm8Amcfh7v/4vHqTAaOu6xFswBYcn8nznBOuseu6COB2ON7ez0tFV0mKL0nRNnCiZZA+lU9/g==",
"dependencies": {
"@orchidjs/unicode-variants": "^1.0.4"
}
},
"node_modules/@orchidjs/unicode-variants": {
"version": "1.0.4",
"resolved": "https://registry.npmjs.org/@orchidjs/unicode-variants/-/unicode-variants-1.0.4.tgz",
"integrity": "sha512-NvVBRnZNE+dugiXERFsET1JlKZfM5lJDEpSMilKW4bToYJ7pxf0Zne78xyXB2ny2c2aHfJ6WLnz1AaTNHAmQeQ=="
},
"node_modules/@pkgr/utils": {
"version": "2.3.1",
"resolved": "https://registry.npmjs.org/@pkgr/utils/-/utils-2.3.1.tgz",
@ -3888,6 +3902,22 @@
"integrity": "sha1-bkWxJj8gF/oKzH2J14sVuL932jI=",
"license": "MIT"
},
"node_modules/tom-select": {
"version": "2.3.1",
"resolved": "https://registry.npmjs.org/tom-select/-/tom-select-2.3.1.tgz",
"integrity": "sha512-QS4vnOcB6StNGqX4sGboGXL2fkhBF2gIBB+8Hwv30FZXYPn0CyYO8kkdATRvwfCTThxiR4WcXwKJZ3cOmtI9eg==",
"dependencies": {
"@orchidjs/sifter": "^1.0.3",
"@orchidjs/unicode-variants": "^1.0.4"
},
"engines": {
"node": "*"
},
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/tom-select"
}
},
"node_modules/tsconfig-paths": {
"version": "3.14.1",
"resolved": "https://registry.npmjs.org/tsconfig-paths/-/tsconfig-paths-3.14.1.tgz",

Some files were not shown because too many files have changed in this diff.