Merge branch 'feature' into 12510-reports

This commit is contained in:
Jeremy Stretch 2024-02-05 14:48:00 -05:00
commit a6f5e7260b
210 changed files with 32250 additions and 3358 deletions

View File

@ -23,7 +23,7 @@ body:
attributes:
label: NetBox Version
description: What version of NetBox are you currently running?
placeholder: v3.6.9
placeholder: v3.7.2
validations:
required: true
- type: dropdown

View File

@ -7,6 +7,9 @@ contact_links:
- name: ❓ Discussion
url: https://github.com/netbox-community/netbox/discussions
about: "If you're just looking for help, try starting a discussion instead."
- name: 🌎 Correct a Translation
url: https://explore.transifex.com/netbox-community/netbox/
about: "Spot an incorrect translation? You can propose a fix on Transifex."
- name: 💡 Plugin Idea
url: https://plugin-ideas.netbox.dev
about: "Have an idea for a plugin? Head over to the ideas board!"

View File

@ -14,7 +14,7 @@ body:
attributes:
label: NetBox version
description: What version of NetBox are you currently running?
placeholder: v3.6.9
placeholder: v3.7.2
validations:
required: true
- type: dropdown

View File

@ -68,6 +68,9 @@ jobs:
- name: Collect static files
run: python netbox/manage.py collectstatic --no-input
- name: Check for missing migrations
run: python netbox/manage.py makemigrations --check
- name: Check PEP8 compliance
run: pycodestyle --ignore=W504,E501 --exclude=node_modules netbox/
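Contributors can run the same checks locally before pushing (an illustrative invocation from the repository root, with the project's virtual environment active):

```nohighlight
python netbox/manage.py makemigrations --check
pycodestyle --ignore=W504,E501 --exclude=node_modules netbox/
```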

View File

@ -9,13 +9,15 @@ on:
permissions:
issues: write
pull-requests: write
discussions: write
jobs:
lock:
runs-on: ubuntu-latest
steps:
- uses: dessant/lock-threads@v4
- uses: dessant/lock-threads@v5
with:
issue-inactive-days: 90
pr-inactive-days: 30
discussion-inactive-days: 180
issue-lock-reason: 'resolved'

View File

@ -86,12 +86,16 @@ intake policy](https://github.com/netbox-community/netbox/wiki/Issue-Intake-Poli
* In most cases, it is not necessary to add a changelog entry: A maintainer will take care of this when the PR is merged. (This helps avoid merge conflicts resulting from multiple PRs being submitted simultaneously.)
* All code submissions should meet the following criteria (CI will enforce these checks):
* All code submissions must meet the following criteria (CI will enforce these checks where feasible):
* Consist entirely of original work
* Python syntax is valid
* All tests pass when run with `./manage.py test`
* PEP 8 compliance is enforced, with the exception that lines may be
greater than 80 characters in length
> [!CAUTION]
> Any contributions which include AI-generated or reproduced content will be rejected.
* Some other tips to keep in mind:
* If you'd like to volunteer for someone else's issue, please post a comment on that issue letting us know. (This will allow the maintainers to assign it to you.)
* Check out our [developer docs](https://docs.netbox.dev/en/stable/development/getting-started/) for tips on setting up your development environment.
@ -117,8 +121,6 @@ We're always looking for motivated individuals to join the maintainers team and
We generally ask that maintainers dedicate around four hours of work to the project each week on average, which includes both hands-on development and project management tasks such as issue triage. Maintainers are also encouraged (but not required) to attend our bi-weekly Zoom call to catch up on recent items.
Many maintainers petition their employer to grant some of their paid time to work on NetBox. In doing so, your employer becomes eligible to be featured as a [NetBox sponsor](https://github.com/netbox-community/netbox/wiki/Sponsorship).
Interested? You can contact our lead maintainer, Jeremy Stretch, at jeremy@netbox.dev or on the [NetDev Community Slack](https://netdev.chat/). We'd love to have you on the team!
## :heart: Other Ways to Contribute

153
README.md
View File

@ -1,86 +1,129 @@
<div align="center">
<img src="https://raw.githubusercontent.com/netbox-community/netbox/develop/docs/netbox_logo.svg" width="400" alt="NetBox logo" />
<p>The premier source of truth powering network automation</p>
<img src="https://github.com/netbox-community/netbox/workflows/CI/badge.svg?branch=master" alt="CI status" />
<p><strong>The cornerstone of every automated network</strong></p>
<a href="https://github.com/netbox-community/netbox/releases"><img src="https://img.shields.io/github/v/release/netbox-community/netbox" alt="Latest release" /></a>
<a href="https://github.com/netbox-community/netbox/blob/master/LICENSE.txt"><img src="https://img.shields.io/badge/license-Apache_2.0-blue.svg" alt="License" /></a>
<a href="https://github.com/netbox-community/netbox/graphs/contributors"><img src="https://img.shields.io/github/contributors/netbox-community/netbox?color=blue" alt="Contributors" /></a>
<a href="https://github.com/netbox-community/netbox/stargazers"><img src="https://img.shields.io/github/stars/netbox-community/netbox?style=flat" alt="GitHub stars" /></a>
<a href="https://explore.transifex.com/netbox-community/netbox/"><img src="https://img.shields.io/badge/languages-6-blue" alt="Languages supported" /></a>
<a href="https://github.com/netbox-community/netbox/actions/workflows/ci.yml"><img src="https://github.com/netbox-community/netbox/workflows/CI/badge.svg?branch=master" alt="CI status" /></a>
<p></p>
</div>
NetBox is the leading solution for modeling and documenting modern networks. By
combining the traditional disciplines of IP address management (IPAM) and
datacenter infrastructure management (DCIM) with powerful APIs and extensions,
NetBox provides the ideal "source of truth" to power network automation.
Available as open source software under the Apache 2.0 license, NetBox serves
as the cornerstone for network automation in thousands of organizations.
NetBox exists to empower network engineers. Since its release in 2016, it has become the go-to solution for modeling and documenting network infrastructure for thousands of organizations worldwide. As a successor to legacy IPAM and DCIM applications, NetBox provides a cohesive, extensive, and accessible data model for all things networked. By providing a single robust user interface and programmable APIs for everything from cable maps to device configurations, NetBox serves as the central source of truth for the modern network.
* **Physical infrastructure:** Accurately model the physical world, from global regions down to individual racks of gear. Then connect everything - network, console, and power!
* **Modern IPAM:** All the standard IPAM functionality you expect, plus VRF import/export tracking, VLAN management, and overlay support.
* **Data circuits:** Confidently manage the delivery of critical circuits from various service providers, modeled seamlessly alongside your own infrastructure.
* **Power tracking:** Map the distribution of power from upstream sources to individual feeds and outlets.
* **Organization:** Manage tenant and contact assignments natively.
* **Powerful search:** Easily find anything you need using a single global search function.
* **Comprehensive logging:** Leverage both automatic change logging and user-submitted journal entries to track your network's growth over time.
* **Endless customization:** Custom fields, custom links, tags, export templates, custom validation, reports, scripts, and more!
* **Flexible permissions:** An advanced permissions system enables very flexible delegation of permissions.
* **Integrations:** Easily connect NetBox to your other tooling via its REST & GraphQL APIs.
* **Plugins:** Not finding what you need in the core application? Try one of many community plugins - or build your own!
<p align="center">
<a href="#netboxs-role">NetBox's Role</a> |
<a href="#why-netbox">Why NetBox?</a> |
<a href="#getting-started">Getting Started</a> |
<a href="#get-involved">Get Involved</a> |
<a href="#project-stats">Project Stats</a> |
<a href="#screenshots">Screenshots</a>
</p>
![Screenshot of NetBox UI](docs/media/screenshots/netbox-ui.png "NetBox UI")
<p align="center">
<img src="docs/media/screenshots/home-light.png" width="600" alt="NetBox user interface screenshot" />
</p>
## NetBox's Role
NetBox functions as the **source of truth** for your network infrastructure. Its job is to define and validate the _intended state_ of all network components and resources. NetBox does not interact with network nodes directly; rather, it makes this data available programmatically to purpose-built automation, monitoring, and assurance tools. This separation of duties enables the construction of a robust yet flexible automation system.
<p align="center">
<img src="docs/media/misc/reference_architecture.png" alt="Reference network automation architecture" />
</p>
The diagram above illustrates the recommended deployment architecture for an automated network, leveraging NetBox as the central authority for network state. This approach allows your team to swap out individual tools to meet changing needs while retaining a predictable, modular workflow.
## Why NetBox?
### Comprehensive Data Model
Racks, devices, cables, IP addresses, VLANs, circuits, power, VPNs, and lots more: NetBox is built for networks. Its comprehensive and thoroughly inter-linked data model provides for natural and highly structured modeling of myriad network primitives that just isn't possible using general-purpose tools. And there's no need to waste time contemplating how to build out a database: Everything is ready to go upon installation.
### Focused Development
NetBox strives to meet a singular goal: Provide the best available solution for making network infrastructure programmatically accessible. Unlike "all-in-one" tools which awkwardly bolt on half-baked features in an attempt to check every box, NetBox is committed to its core function. NetBox provides the best possible solution for modeling network infrastructure, and provides rich APIs for integrating with tools that excel in other areas of network automation.
### Extensible and Customizable
No two networks are exactly the same. Users are empowered to extend NetBox's native data model with custom fields and tags to best suit their unique needs. You can even write your own plugins to introduce entirely new objects and functionality!
### Flexible Permissions
NetBox includes a fully customizable permission system, which affords administrators incredible granularity when assigning roles to users and groups. Want to restrict certain users to working only with cabling and not be able to change IP addresses? Or maybe each team should have access only to a particular tenant? NetBox enables you to craft roles as you see fit.
### Custom Validation & Protection Rules
The data you put into NetBox is crucial to network operations. In addition to its robust native validation rules, NetBox provides mechanisms for administrators to define their own custom validation rules for objects. Custom validation can be used both to ensure new or modified objects adhere to a set of rules, and to prevent the deletion of objects which don't meet certain criteria. (For example, you might want to prevent the deletion of a device with an "active" status.)
### Device Configuration Rendering
NetBox can render user-created Jinja2 templates to generate device configurations from its own data. Configuration templates can be uploaded individually or pulled automatically from an external source, such as a git repository. Rendered configurations can be retrieved via the REST API for application directly to network devices via a provisioning tool such as Ansible or Salt.
### Custom Scripts
Complex workflows, such as provisioning a new branch office, can be tedious to carry out via the user interface. NetBox allows you to write and upload custom scripts that can be run directly from the UI. Scripts prompt users for input and then automate the necessary tasks to greatly simplify otherwise burdensome processes.
### Automated Events
Users can define event rules to automatically trigger a custom script or outbound webhook in response to a NetBox event. For example, you might want to automatically update a network monitoring service whenever a new device is added to NetBox, or update a DHCP server when an IP range is allocated.
### Comprehensive Change Logging
NetBox automatically logs the creation, modification, and deletion of all managed objects, providing a thorough change history. Changes can be attributed to the executing user, and related changes are grouped automatically by request ID.
> [!NOTE]
> A complete list of NetBox's myriad features can be found in [the introductory documentation](https://docs.netbox.dev/en/stable/introduction/).
## Getting Started
<div align="center">
[![NetBox logo](https://raw.githubusercontent.com/wiki/netbox-community/netbox/images/deploy/deploy1.png)](https://github.com/netbox-community/netbox)
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
[![Docker logo](https://raw.githubusercontent.com/wiki/netbox-community/netbox/images/deploy/deploy2.png)](https://github.com/netbox-community/netbox-docker)
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
[![NetBox Labs logo](https://raw.githubusercontent.com/wiki/netbox-community/netbox/images/deploy/deploy3.png)](https://netboxlabs.com/netbox-cloud/)
</div>
* Just want to explore? Check out [our public demo](https://demo.netbox.dev/) right now!
* The [official documentation](https://docs.netbox.dev) offers a comprehensive introduction.
* Check out [our wiki](https://github.com/netbox-community/netbox/wiki/Community-Contributions) for even more projects to get the most out of NetBox!
<p align="center">
<a href="https://netboxlabs.com/netbox-cloud/"><img src="docs/media/misc/netbox_cloud.png" alt="NetBox Cloud" /></a><br />
Looking for an enterprise solution? Check out <strong><a href="https://netboxlabs.com/netbox-cloud/">NetBox Cloud</a></strong>!
</p>
## Get Involved
* Follow [@NetBoxOfficial](https://twitter.com/NetBoxOfficial) on Twitter!
* Join the conversation on [the discussion forum](https://github.com/netbox-community/netbox/discussions) and [Slack](https://netdev.chat/)!
* Already a power user? You can [suggest a feature](https://github.com/netbox-community/netbox/issues/new?assignees=&labels=type%3A+feature&template=feature_request.yaml) or [report a bug](https://github.com/netbox-community/netbox/issues/new?assignees=&labels=type%3A+bug&template=bug_report.yaml) on GitHub.
* Contributions from the community are encouraged and appreciated! Check out our [contributing guide](CONTRIBUTING.md) to get started.
* [Share your idea](https://plugin-ideas.netbox.dev/) for a new plugin, or [learn how to build one](https://github.com/netbox-community/netbox-plugin-tutorial) yourself!
## Project Stats
<div align="center">
<p align="center">
<a href="https://github.com/netbox-community/netbox/commits"><img src="https://images.repography.com/29023055/netbox-community/netbox/recent-activity/whQtEr_TGD9PhW1BPlhlEQ5jnrgQ0KJpm-LlGtpoGO0/3Kx_iWUSBRJ5-AI4QwJEJWrUDEz3KrX2lvh8aYE0WXY_timeline.svg" alt="Timeline graph"></a>
<a href="https://github.com/netbox-community/netbox/issues"><img src="https://images.repography.com/29023055/netbox-community/netbox/recent-activity/whQtEr_TGD9PhW1BPlhlEQ5jnrgQ0KJpm-LlGtpoGO0/3Kx_iWUSBRJ5-AI4QwJEJWrUDEz3KrX2lvh8aYE0WXY_issues.svg" alt="Issues graph"></a>
<a href="https://github.com/netbox-community/netbox/pulls"><img src="https://images.repography.com/29023055/netbox-community/netbox/recent-activity/whQtEr_TGD9PhW1BPlhlEQ5jnrgQ0KJpm-LlGtpoGO0/3Kx_iWUSBRJ5-AI4QwJEJWrUDEz3KrX2lvh8aYE0WXY_prs.svg" alt="Pull requests graph"></a>
<a href="https://github.com/netbox-community/netbox/graphs/contributors"><img src="https://images.repography.com/29023055/netbox-community/netbox/recent-activity/whQtEr_TGD9PhW1BPlhlEQ5jnrgQ0KJpm-LlGtpoGO0/3Kx_iWUSBRJ5-AI4QwJEJWrUDEz3KrX2lvh8aYE0WXY_users.svg" alt="Top contributors"></a>
<br />Stats via <a href="https://repography.com">Repography</a>
</div>
## Sponsors
<div align="center">
[![NetBox Labs](https://raw.githubusercontent.com/wiki/netbox-community/netbox/images/sponsors/netbox_labs.png)](https://netboxlabs.com)
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
[![DigitalOcean](https://raw.githubusercontent.com/wiki/netbox-community/netbox/images/sponsors/digitalocean.png)](https://try.digitalocean.com/developer-cloud)
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
[![Sentry](https://raw.githubusercontent.com/wiki/netbox-community/netbox/images/sponsors/sentry.png)](https://sentry.io)
<br />
[![Equinix Metal](https://raw.githubusercontent.com/wiki/netbox-community/netbox/images/sponsors/equinix.png)](https://metal.equinix.com)
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
[![OneMind Services](https://raw.githubusercontent.com/wiki/netbox-community/netbox/images/sponsors/onemind_services.png)](https://onemindservices.com)
</div>
## Screenshots
![Screenshot of main page (dark mode)](docs/media/screenshots/home-dark.png "Main page (dark mode)")
![Screenshot of rack elevation](docs/media/screenshots/rack.png "Rack elevation")
![Screenshot of prefixes hierarchy](docs/media/screenshots/prefixes-list.png "Prefixes hierarchy")
![Screenshot of cable trace](docs/media/screenshots/cable-trace.png "Cable tracing")
<p align="center">
<strong>NetBox Dashboard (Light Mode)</strong><br />
<img src="docs/media/screenshots/home-light.png" width="600" alt="NetBox dashboard (light mode)" />
</p>
<p align="center">
<strong>NetBox Dashboard (Dark Mode)</strong><br />
<img src="docs/media/screenshots/home-dark.png" width="600" alt="NetBox dashboard (dark mode)" />
</p>
<p align="center">
<strong>Prefixes List</strong><br />
<img src="docs/media/screenshots/prefixes-list.png" width="600" alt="Prefixes list" />
</p>
<p align="center">
<strong>Rack View</strong><br />
<img src="docs/media/screenshots/rack.png" width="600" alt="Rack view" />
</p>
<p align="center">
<strong>Cable Trace</strong><br />
<img src="docs/media/screenshots/cable-trace.png" width="600" alt="Cable trace" />
</p>

View File

@ -73,7 +73,7 @@ You should be redirected to Microsoft's authentication portal. Enter the usernam
If successful, you will be redirected back to the NetBox UI, and will be logged in as the AD user. You can verify this by navigating to your profile (using the button at top right).
This user account has been replicated locally to NetBox, and can now be assigned groups and permissions within the NetBox admin UI.
This user account has been replicated locally to NetBox, and can now be assigned groups and permissions.
## Troubleshooting

View File

@ -67,4 +67,4 @@ You should be redirected to Okta's authentication portal. Enter the username/ema
If successful, you will be redirected back to the NetBox UI, and will be logged in as the Okta user. You can verify this by navigating to your profile (using the button at top right).
This user account has been replicated locally to NetBox, and can now be assigned groups and permissions within the NetBox admin UI.
This user account has been replicated locally to NetBox, and can now be assigned groups and permissions.

View File

@ -2,9 +2,9 @@
## Local Authentication
Local user accounts and groups can be created in NetBox under the "Authentication and Authorization" section of the administrative user interface. This interface is available only to users with the "staff" permission enabled.
Local user accounts and groups can be created in NetBox under the "Authentication" section in the "Admin" menu. This section is available only to users with the "staff" permission enabled.
At a minimum, each user account must have a username and password set. User accounts may also denote a first name, last name, and email address. [Permissions](../permissions.md) may also be assigned to users and/or groups within the admin UI.
At a minimum, each user account must have a username and password set. User accounts may also denote a first name, last name, and email address. [Permissions](../permissions.md) may also be assigned to individual users and/or groups as needed.
## Remote Authentication

View File

@ -10,6 +10,9 @@ The time zone NetBox will use when dealing with dates and times. It is recommend
You may define custom formatting for date and times. For detailed instructions on writing format strings, please see [the Django documentation](https://docs.djangoproject.com/en/stable/ref/templates/builtins/#date). Default formats are listed below.
!!! note
These system defaults will be overridden by a user's selected language/locale when [localization](./system.md#enable_localization) is enabled.
```python
DATE_FORMAT = 'N j, Y' # June 26, 2016
SHORT_DATE_FORMAT = 'Y-m-d' # 2016-06-26

View File

@ -46,4 +46,4 @@ The configuration file may be modified at any time. However, the WSGI service (e
$ sudo systemctl restart netbox
```
Configuration parameters which are set via the admin UI (those listed under "dynamic settings") take effect immediately.
Dynamic configuration parameters (those which can be modified via the UI) take effect immediately.

View File

@ -99,6 +99,14 @@ The maximum size (in bytes) of an incoming HTTP request (i.e. `GET` or `POST` da
---
## DJANGO_ADMIN_ENABLED
Default: False
Setting this to True installs the `django.contrib.admin` app and enables the [Django admin UI](https://docs.djangoproject.com/en/5.0/ref/contrib/admin/). This may be necessary to support older plugins which do not integrate with the native NetBox interface.
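A minimal sketch of enabling this parameter in `configuration.py` (illustrative only; the default leaves the admin UI disabled):

```python
# configuration.py — enable the legacy Django admin UI (default: False)
DJANGO_ADMIN_ENABLED = True
```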
---
## ENFORCE_GLOBAL_UNIQUE
!!! tip "Dynamic Configuration Parameter"

View File

@ -69,15 +69,7 @@ Email is sent from NetBox only for critical events or if configured for [logging
Default: False
Determines if localization features are enabled or not. This should only be enabled for development or testing purposes as netbox is not yet fully localized. Turning this on will localize numeric and date formats (overriding what is set for DATE_FORMAT) based on the browser locale as well as translate certain strings from third party modules.
---
## GIT_PATH
Default: `git`
The system path to the `git` executable, used by the synchronization backend for remote git repositories.
Determines if localization features are enabled or not. This should only be enabled for development or testing purposes as netbox is not yet fully localized. Turning this on will localize numeric and date formats (overriding any configured [system defaults](./date-time.md#date-and-time-formatting)) based on the browser locale as well as translate certain strings from third party modules.
---

View File

@ -296,9 +296,9 @@ An IPv4 or IPv6 network with a mask. Returns a `netaddr.IPNetwork` object. Two a
## Running Custom Scripts
!!! note
To run a custom script, a user must be assigned via permissions for `Extras > Script`, `Extras > ScriptModule`, and `Core > ManagedFile` objects. They must also be assigned the `extras.run_script` permission. This is achieved by assigning the user (or group) a permission on the Script object and specifying the `run` action in the admin UI as shown below.
To run a custom script, a user must be assigned permissions for `Extras > Script`, `Extras > Script Module`, and `Core > Managed File` objects. They must also be assigned the `extras.run_script` permission. This is achieved by assigning the user (or group) a permission on the Script object and specifying the `run` action in "Permissions" as shown below.
![Adding the run action to a permission](../media/admin_ui_run_permission.png)
![Adding the run action to a permission](../media/run_permission.png)
### Via the Web UI

View File

@ -80,6 +80,18 @@ Run the following command to update the device type definition validation schema
This will automatically update the schema file at `contrib/generated_schema.json`.
### Update & Compile Translations
Log into [Transifex](https://app.transifex.com/netbox-community/netbox/dashboard/) to download the updated string maps. Download the resource (portable object, or `.po`) file for each language and save them to `netbox/translations/$lang/LC_MESSAGES/django.po`, overwriting the current files. (Be sure to click the **Download for use** link.)
![Transifex download](../media/development/transifex_download.png)
Once the resource files for all languages have been updated, compile the machine object (`.mo`) files using the `compilemessages` management command:
```nohighlight
./manage.py compilemessages
```
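As a quick sanity check (illustrative only), confirm that a compiled `.mo` file now exists alongside each language's `.po` file:

```nohighlight
ls netbox/translations/*/LC_MESSAGES/django.mo
```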
### Update Version and Changelog
* Update the `VERSION` constant in `settings.py` to the new release version.
@ -90,7 +102,7 @@ Commit these changes to the `develop` branch and push upstream.
### Verify CI Build Status
Ensure that continuous integration testing on the `develop` branch is completing successfully. If it fails, take action to correct the failure before proceding with the release.
Ensure that continuous integration testing on the `develop` branch is completing successfully. If it fails, take action to correct the failure before proceeding with the release.
### Submit a Pull Request

View File

@ -0,0 +1,30 @@
# Translations
NetBox coordinates all translation work using the [Transifex](https://explore.transifex.com/netbox-community/netbox/) platform. Signing up for a Transifex account is free.
All language translations in NetBox are generated from the source file found at `netbox/translations/en/LC_MESSAGES/django.po`. This file contains the original English strings with empty mappings, and is generated as part of NetBox's release process. Transifex updates source strings from this file on a recurring basis, so new translation strings will appear in the platform automatically as it is updated in the code base.
Reviewers log into Transifex and navigate to their designated language(s) to translate strings. The initial translation for most strings will be machine-generated via the AWS Translate service. Human reviewers are responsible for reviewing these translations and making corrections where necessary.
Immediately prior to each NetBox release, the translation maps for all completed languages will be downloaded from Transifex, compiled, and checked into the NetBox code base by a maintainer.
## Updating Translation Sources
To update the English `.po` file from which all translations are derived, use the `makemessages` management command:
```nohighlight
./manage.py makemessages -l en
```
Then, commit the change and push to the `develop` branch on GitHub. After some time, any new strings will appear for translation on Transifex automatically.
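For example (an illustrative sequence; adjust the commit message and remote to your workflow):

```nohighlight
git add netbox/translations/en/LC_MESSAGES/django.po
git commit -m "Update translation source strings"
git push origin develop
```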
## Proposing New Languages
If you'd like to add support for a new language to NetBox, the first step is to [submit a GitHub issue](https://github.com/netbox-community/netbox/issues/new?assignees=&labels=type%3A+translation&projects=&template=translation.yaml) to capture the proposal. While we'd like to add as many languages as possible, we do need to limit the rate at which new languages are added. New languages will be selected according to community interest and the number of volunteers who sign up as translators.
Once a proposed language has been approved, a NetBox maintainer will:
* Add it to the Transifex platform
* Designate one or more reviewers
* Create the initial machine-generated translations for review
* Add it to the list of supported languages

View File

@ -39,7 +39,7 @@ When rendered for a specific NetBox device, the template's `device` variable wil
### Context Data
The objet for which the configuration is being rendered is made available as template context as `device` or `virtualmachine` for devices and virtual machines, respectively. Additionally, NetBox model classes can be accessed by the app or plugin in which they reside. For example:
The object for which the configuration is being rendered is made available as template context as `device` or `virtualmachine` for devices and virtual machines, respectively. Additionally, NetBox model classes can be accessed by the app or plugin in which they reside. For example:
```
There are {{ dcim.Site.objects.count() }} sites.
@ -70,6 +70,11 @@ This request will trigger resolution of the device's preferred config template i
If no config template has been assigned to any of these three objects, the request will fail.
The configuration can be rendered as JSON or as plaintext by setting the `Accept:` HTTP header. For example:
* `Accept: application/json`
* `Accept: text/plain`
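For illustration, a request such as the following would return the rendered configuration as plain text (the hostname, device ID `123`, and token are placeholders, and the per-device `render-config` endpoint path is assumed here):

```nohighlight
curl -s -X POST \
  -H "Authorization: Token $NETBOX_TOKEN" \
  -H "Accept: text/plain" \
  https://netbox.example.com/api/dcim/devices/123/render-config/
```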
### General Purpose Use
NetBox config templates can also be rendered without being tied to any specific device, using a separate general purpose REST API endpoint. Any data included with a POST request to this endpoint will be passed as context data for the template.
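A hedged sketch of such a request (the template ID `45`, the context keys, and the endpoint path are assumptions to be verified against the REST API documentation):

```nohighlight
curl -s -X POST \
  -H "Authorization: Token $NETBOX_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Accept: text/plain" \
  -d '{"hostname": "router1", "asn": 65000}' \
  https://netbox.example.com/api/extras/config-templates/45/render/
```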

View File

@ -28,4 +28,4 @@ For more detail, see the reference documentation for NetBox's [conditional logic
## Event Rule Processing
When a change is detected, any resulting events are placed into a Redis queue for processing. This allows the user's request to complete without needing to wait for the outgoing event(s) to be processed. The events are then extracted from the queue by the `rqworker` process. The current event queue and any failed events can be inspected in the admin UI under System > Background Tasks.
When a change is detected, any resulting events are placed into a Redis queue for processing. This allows the user's request to complete without needing to wait for the outgoing event(s) to be processed. The events are then extracted from the queue by the `rqworker` process. The current event queue and any failed events can be inspected under System > Background Tasks.

View File

@ -1,6 +1,6 @@
# Synchronized Data
Several models in NetBox support the automatic synchronization of local data from a designated remote source. For example, [configuration templates](./configuration-rendering.md) defined in NetBox can source their content from text files stored in a remote git repository. This accomplished using the core [data source](../models/core/datasource.md) and [data file](../models/core/datafile.md) models.
Several models in NetBox support the automatic synchronization of local data from a designated remote source. For example, [configuration templates](./configuration-rendering.md) defined in NetBox can source their content from text files stored in a remote git repository. This is accomplished using the core [data source](../models/core/datasource.md) and [data file](../models/core/datafile.md) models.
To enable remote data synchronization, the NetBox administrator first designates one or more remote data sources. NetBox currently supports the following source types:

View File

@ -4,7 +4,7 @@
NetBox is the leading solution for modeling and documenting modern networks. By combining the traditional disciplines of IP address management (IPAM) and datacenter infrastructure management (DCIM) with powerful APIs and extensions, NetBox provides the ideal "source of truth" to power network automation. Read on to discover why thousands of organizations worldwide put NetBox at the heart of their infrastructure.
[![NetBox UI](./media/screenshots/netbox-ui.png)](./media/screenshots/netbox-ui.png)
[![NetBox UI](./media/screenshots/home-light.png)](./media/screenshots/home-light.png)
## :material-server-network: Built for Networks

View File

@ -58,3 +58,6 @@ You should see output similar to the following:
If the NetBox service fails to start, issue the command `journalctl -eu netbox` to check for log messages that may indicate the problem.
Once you've verified that the WSGI workers are up and running, move on to HTTP server setup.
!!! note
There is a bug in the current stable release of gunicorn (v21.2.0) where automatic restarts of the worker processes can result in 502 errors under heavy load. (See [gunicorn bug #3038](https://github.com/benoitc/gunicorn/issues/3038) for more detail.) Users who encounter this issue may opt to downgrade to an earlier, unaffected release of gunicorn (`pip install gunicorn==20.1.0`). Note, however, that this earlier release does not officially support Python 3.11.

View File

@ -73,9 +73,9 @@ If no body template is specified, the request body will be populated with a JSON
## Webhook Processing
Using [Event Rules](../features/event-rules.md), when a change is detected, any resulting webhooks are placed into a Redis queue for processing. This allows the user's request to complete without needing to wait for the outgoing webhook(s) to be processed. The webhooks are then extracted from the queue by the `rqworker` process and HTTP requests are sent to their respective destinations. The current webhook queue and any failed webhooks can be inspected in the admin UI under System > Background Tasks.
Using [Event Rules](../features/event-rules.md), when a change is detected, any resulting webhooks are placed into a Redis queue for processing. This allows the user's request to complete without needing to wait for the outgoing webhook(s) to be processed. The webhooks are then extracted from the queue by the `rqworker` process and HTTP requests are sent to their respective destinations. The current webhook queue and any failed webhooks can be inspected under System > Background Tasks.
A request is considered successful if the response has a 2XX status code; otherwise, the request is marked as having failed. Failed requests may be retried manually via the admin UI.
A request is considered successful if the response has a 2XX status code; otherwise, the request is marked as having failed. Failed requests may be requeued manually under System > Background Tasks.
## Troubleshooting
@ -106,6 +106,6 @@ Content-Type: application/x-www-form-urlencoded
------------
```
Note that `webhook_receiver` does not actually _do_ anything with the information received: It merely prints the request headers and body for inspection.
Note that `webhook_receiver` does not actually _do_ anything with the information received: It merely prints the request headers and body for inspection. If you don't see any output, check that the `rqworker` process is running and that webhook events are being placed into the queue.
Now, when the NetBox webhook is triggered and processed, you should see its headers and content appear in the terminal where the webhook receiver is listening. If you don't, check that the `rqworker` process is running and that webhook events are being placed into the queue (visible under the NetBox admin UI).
Webhook results can be found in the NetBox admin UI under the Background Tasks section. You can see any finished or failed runs, as well as the error log for failed webhooks.
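To reproduce this locally (illustrative; run from the NetBox installation directory with the virtual environment active), start the built-in receiver and then trigger a change in NetBox:

```nohighlight
python netbox/manage.py webhook_receiver
```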

Binary image files (documentation screenshots and media assets) changed; contents not shown.

View File

@ -14,7 +14,7 @@ The IKE version employed (v1 or v2).
### Mode
The IKE mode employed (main or aggressive).
The mode employed (main or aggressive) when IKEv1 is in use. This setting is not supported for IKEv2.
### Proposals

View File

@ -47,3 +47,14 @@ class ReminderWidget(DashboardWidget):
def render(self, request):
return self.config.get('content')
```
## Initialization
To register the widget, the widget module must be imported. The recommended approach is to do this within the `ready` method of your `PluginConfig`:
```python
class FooBarConfig(PluginConfig):
def ready(self):
super().ready()
from . import widgets # point this to the above widget module you created
```

View File

@ -20,4 +20,4 @@ backends = [MyDataBackend]
!!! tip
The path to the list of data backends can be modified by setting `data_backends` in the PluginConfig instance.
::: core.data_backends.DataBackend
::: netbox.data_backends.DataBackend

View File

@ -1,5 +1,60 @@
# NetBox v3.7
## v3.7.3 (FUTURE)
---
## v3.7.2 (2024-02-05)
### Enhancements
* [#13729](https://github.com/netbox-community/netbox/issues/13729) - Omit sensitive data source parameters from change log data
* [#14645](https://github.com/netbox-community/netbox/issues/14645) - Limit the number of assigned IP addresses displayed under interfaces list
### Bug Fixes
* [#14500](https://github.com/netbox-community/netbox/issues/14500) - Optimize calculation of available child prefixes & ranges when viewing a prefix
* [#14511](https://github.com/netbox-community/netbox/issues/14511) - Fix GraphQL support for interfaces connected to provider networks
* [#14572](https://github.com/netbox-community/netbox/issues/14572) - Correct the number of jobs listed for individual report & script modules
* [#14703](https://github.com/netbox-community/netbox/issues/14703) - Revert to the default layout when encountering a misconfigured dashboard
* [#14755](https://github.com/netbox-community/netbox/issues/14755) - Fix validation of choice values & labels when creating a custom field choice set via the REST API
* [#14838](https://github.com/netbox-community/netbox/issues/14838) - Avoid corrupting JSON data when changing the action type while editing an event rule
* [#14839](https://github.com/netbox-community/netbox/issues/14839) - Fix form validation error when attempting to terminate a tunnel to a virtual machine interface
* [#14840](https://github.com/netbox-community/netbox/issues/14840) - Fix `NoReverseMatch` exception when rendering a custom field which references a user
* [#14847](https://github.com/netbox-community/netbox/issues/14847) - IKE policy mode may be set only when IKEv1 is selected
* [#14851](https://github.com/netbox-community/netbox/issues/14851) - Automatically remove any associated bookmarks when deleting a user
* [#14879](https://github.com/netbox-community/netbox/issues/14879) - Include custom fields in REST API representation of data sources
* [#14885](https://github.com/netbox-community/netbox/issues/14885) - Add missing "group" field to VPN tunnel creation form
* [#14892](https://github.com/netbox-community/netbox/issues/14892) - Fix exception when running report/script via command line due to missing username
* [#14920](https://github.com/netbox-community/netbox/issues/14920) - Include button to display available status choices when bulk importing virtual device contexts
* [#14945](https://github.com/netbox-community/netbox/issues/14945) - Fix "select all" button for device type components
* [#14947](https://github.com/netbox-community/netbox/issues/14947) - Ensure that application & removal of tags is always recorded in an object's change log
* [#14962](https://github.com/netbox-community/netbox/issues/14962) - Fix config context rendering for VMs assigned directly to a site (rather than via a cluster)
* [#14999](https://github.com/netbox-community/netbox/issues/14999) - Fix "create & add another" link for interface FHRP group assignment
* [#15015](https://github.com/netbox-community/netbox/issues/15015) - Pre-populate assigned tenant when allocating next available IP address under prefix view
* [#15020](https://github.com/netbox-community/netbox/issues/15020) - Automatically update all VMs when changing a cluster's assigned site
* [#15025](https://github.com/netbox-community/netbox/issues/15025) - The `can_add()` template filter should accept a model (not an instance)
---
## v3.7.1 (2024-01-17)
### Bug Fixes
* [#13844](https://github.com/netbox-community/netbox/issues/13844) - Use `available_at_site` filter when filtering VLANs under prefix form
* [#14663](https://github.com/netbox-community/netbox/issues/14663) - Fix tunnel creation when setting initial termination to a VM interface
* [#14706](https://github.com/netbox-community/netbox/issues/14706) - Relax one-to-one mapping of tunnel termination to IP address
* [#14709](https://github.com/netbox-community/netbox/issues/14709) - Fix typo in tunnel termination type choice name
* [#14749](https://github.com/netbox-community/netbox/issues/14749) - Remove errant translation wrapper from `installed_device` on DeviceBay
* [#14778](https://github.com/netbox-community/netbox/issues/14778) - Custom field API serializer should accept null values for all optional fields
* [#14791](https://github.com/netbox-community/netbox/issues/14791) - Hide available prefixes when searching within a parent prefix
* [#14793](https://github.com/netbox-community/netbox/issues/14793) - Add missing Diffie-Hellman group 15
* [#14816](https://github.com/netbox-community/netbox/issues/14816) - Ensure default contact assignment ordering is consistent
* [#14817](https://github.com/netbox-community/netbox/issues/14817) - Relax required fields for IKE & IPSec models on bulk import
* [#14827](https://github.com/netbox-community/netbox/issues/14827) - Ensure all matching event rules are processed in response to an event
---
## v3.7.0 (2023-12-29)
### Breaking Changes

View File

@ -286,6 +286,7 @@ nav:
- User Preferences: 'development/user-preferences.md'
- Web UI: 'development/web-ui.md'
- Internationalization: 'development/internationalization.md'
- Translations: 'development/translations.md'
- Release Checklist: 'development/release-checklist.md'
- git Cheat Sheet: 'development/git-cheat-sheet.md'
- Release Notes:

View File

@ -36,7 +36,7 @@ class DataSourceSerializer(NetBoxModelSerializer):
model = DataSource
fields = [
'id', 'url', 'display', 'name', 'type', 'source_url', 'enabled', 'status', 'description', 'comments',
'parameters', 'ignore_rules', 'created', 'last_updated', 'file_count',
'parameters', 'ignore_rules', 'custom_fields', 'created', 'last_updated', 'file_count',
]

26
netbox/core/constants.py Normal file
View File

@ -0,0 +1,26 @@
from dataclasses import dataclass
from django.utils.translation import gettext_lazy as _
from rq.job import JobStatus
__all__ = (
'RQ_TASK_STATUSES',
)
@dataclass
class Status:
label: str
color: str
RQ_TASK_STATUSES = {
JobStatus.QUEUED: Status(_('Queued'), 'cyan'),
JobStatus.FINISHED: Status(_('Finished'), 'green'),
JobStatus.FAILED: Status(_('Failed'), 'red'),
JobStatus.STARTED: Status(_('Started'), 'blue'),
JobStatus.DEFERRED: Status(_('Deferred'), 'gray'),
JobStatus.SCHEDULED: Status(_('Scheduled'), 'purple'),
JobStatus.STOPPED: Status(_('Stopped'), 'orange'),
JobStatus.CANCELED: Status(_('Cancelled'), 'yellow'),
}

View File

@ -21,7 +21,7 @@ class DataSourceBulkEditForm(NetBoxModelBulkEditForm):
enabled = forms.NullBooleanField(
required=False,
widget=BulkEditNullBooleanSelect(),
label=_('Enforce unique space')
label=_('Enabled')
)
description = forms.CharField(
label=_('Description'),

View File

@ -9,9 +9,9 @@ class Command(_Command):
"""
This built-in management command enables the creation of new database schema migration files, which should
never be required by an ordinary user. We prevent this command from executing unless the configuration
indicates that the user is a developer (i.e. configuration.DEVELOPER == True).
indicates that the user is a developer (i.e. configuration.DEVELOPER == True), or it was run with --check.
"""
if not settings.DEVELOPER:
if not kwargs['check_changes'] and not settings.DEVELOPER:
raise CommandError(
"This command is available for development purposes only. It will\n"
"NOT resolve any issues with missing or unapplied migrations. For assistance,\n"

View File

@ -14,6 +14,7 @@ from django.utils import timezone
from django.utils.module_loading import import_string
from django.utils.translation import gettext as _
from netbox.constants import CENSOR_TOKEN, CENSOR_TOKEN_CHANGED
from netbox.models import PrimaryModel
from netbox.models.features import JobsMixin
from netbox.registry import registry
@ -130,6 +131,28 @@ class DataSource(JobsMixin, PrimaryModel):
'source_url': f"URLs for local sources must start with file:// (or specify no scheme)"
})
def to_objectchange(self, action):
objectchange = super().to_objectchange(action)
# Censor any backend parameters marked as sensitive in the serialized data
pre_change_params = {}
post_change_params = {}
if objectchange.prechange_data:
pre_change_params = objectchange.prechange_data.get('parameters') or {} # parameters may be None
if objectchange.postchange_data:
post_change_params = objectchange.postchange_data.get('parameters') or {}
for param in self.backend_class.sensitive_parameters:
if post_change_params.get(param):
if post_change_params[param] != pre_change_params.get(param):
# Set the "changed" token if the parameter's value has been modified
post_change_params[param] = CENSOR_TOKEN_CHANGED
else:
post_change_params[param] = CENSOR_TOKEN
if pre_change_params.get(param):
pre_change_params[param] = CENSOR_TOKEN
return objectchange
def enqueue_sync_job(self, request):
"""
Enqueue a background job to synchronize the DataSource by calling sync().

View File

@ -1,4 +1,5 @@
from .config import *
from .data import *
from .jobs import *
from .tasks import *
from .plugins import *

View File

@ -1,9 +1,12 @@
import django_tables2 as tables
from django.utils.safestring import mark_safe
from core.constants import RQ_TASK_STATUSES
from netbox.registry import registry
__all__ = (
'BackendTypeColumn',
'RQJobStatusColumn',
)
@ -18,3 +21,16 @@ class BackendTypeColumn(tables.Column):
def value(self, value):
return value
class RQJobStatusColumn(tables.Column):
"""
Render a colored label for the status of an RQ job.
"""
def render(self, value):
status = RQ_TASK_STATUSES.get(value)
return mark_safe(f'<span class="badge text-bg-{status.color}">{status.label}</span>')
def value(self, value):
status = RQ_TASK_STATUSES.get(value)
return status.label

134
netbox/core/tables/tasks.py Normal file
View File

@ -0,0 +1,134 @@
import django_tables2 as tables
from django.utils.translation import gettext_lazy as _
from django_tables2.utils import A
from core.tables.columns import RQJobStatusColumn
from netbox.tables import BaseTable
class BackgroundQueueTable(BaseTable):
name = tables.Column(
verbose_name=_("Name")
)
jobs = tables.Column(
linkify=("core:background_task_list", [A("index"), "queued"]),
verbose_name=_("Queued")
)
oldest_job_timestamp = tables.Column(
verbose_name=_("Oldest Task")
)
started_jobs = tables.Column(
linkify=("core:background_task_list", [A("index"), "started"]),
verbose_name=_("Active")
)
deferred_jobs = tables.Column(
linkify=("core:background_task_list", [A("index"), "deferred"]),
verbose_name=_("Deferred")
)
finished_jobs = tables.Column(
linkify=("core:background_task_list", [A("index"), "finished"]),
verbose_name=_("Finished")
)
failed_jobs = tables.Column(
linkify=("core:background_task_list", [A("index"), "failed"]),
verbose_name=_("Failed")
)
scheduled_jobs = tables.Column(
linkify=("core:background_task_list", [A("index"), "scheduled"]),
verbose_name=_("Scheduled")
)
workers = tables.Column(
linkify=("core:worker_list", [A("index")]),
verbose_name=_("Workers")
)
host = tables.Column(
accessor="connection_kwargs__host",
verbose_name=_("Host")
)
port = tables.Column(
accessor="connection_kwargs__port",
verbose_name=_("Port")
)
db = tables.Column(
accessor="connection_kwargs__db",
verbose_name=_("DB")
)
pid = tables.Column(
accessor="scheduler__pid",
verbose_name=_("Scheduler PID")
)
class Meta(BaseTable.Meta):
empty_text = _('No queues found')
fields = (
'name', 'jobs', 'oldest_job_timestamp', 'started_jobs', 'deferred_jobs', 'finished_jobs', 'failed_jobs',
'scheduled_jobs', 'workers', 'host', 'port', 'db', 'pid',
)
default_columns = (
'name', 'jobs', 'started_jobs', 'deferred_jobs', 'finished_jobs', 'failed_jobs', 'scheduled_jobs',
'workers',
)
class BackgroundTaskTable(BaseTable):
id = tables.Column(
linkify=("core:background_task", [A("id")]),
verbose_name=_("ID")
)
created_at = tables.DateTimeColumn(
verbose_name=_("Created")
)
enqueued_at = tables.DateTimeColumn(
verbose_name=_("Enqueued")
)
ended_at = tables.DateTimeColumn(
verbose_name=_("Ended")
)
status = RQJobStatusColumn(
verbose_name=_("Status"),
accessor='get_status'
)
callable = tables.Column(
empty_values=(),
verbose_name=_("Callable")
)
class Meta(BaseTable.Meta):
empty_text = _('No tasks found')
fields = (
'id', 'created_at', 'enqueued_at', 'ended_at', 'status', 'callable',
)
default_columns = (
'id', 'created_at', 'enqueued_at', 'ended_at', 'status', 'callable',
)
def render_callable(self, value, record):
try:
return record.func_name
except Exception as e:
return repr(e)
class WorkerTable(BaseTable):
name = tables.Column(
linkify=("core:worker", [A("name")]),
verbose_name=_("Name")
)
state = tables.Column(
verbose_name=_("State")
)
birth_date = tables.DateTimeColumn(
verbose_name=_("Birth")
)
pid = tables.Column(
verbose_name=_("PID")
)
class Meta(BaseTable.Meta):
empty_text = _('No workers found')
fields = (
'name', 'state', 'birth_date', 'pid',
)
default_columns = (
'name', 'state', 'birth_date', 'pid',
)

View File

@ -0,0 +1,122 @@
from django.test import TestCase
from core.models import DataSource
from extras.choices import ObjectChangeActionChoices
from netbox.constants import CENSOR_TOKEN, CENSOR_TOKEN_CHANGED
class DataSourceChangeLoggingTestCase(TestCase):
def test_password_added_on_create(self):
datasource = DataSource.objects.create(
name='Data Source 1',
type='git',
source_url='http://localhost/',
parameters={
'username': 'jeff',
'password': 'foobar123',
}
)
objectchange = datasource.to_objectchange(ObjectChangeActionChoices.ACTION_CREATE)
self.assertIsNone(objectchange.prechange_data)
self.assertEqual(objectchange.postchange_data['parameters']['username'], 'jeff')
self.assertEqual(objectchange.postchange_data['parameters']['password'], CENSOR_TOKEN_CHANGED)
def test_password_added_on_update(self):
datasource = DataSource.objects.create(
name='Data Source 1',
type='git',
source_url='http://localhost/'
)
datasource.snapshot()
# Add a blank password
datasource.parameters = {
'username': 'jeff',
'password': '',
}
objectchange = datasource.to_objectchange(ObjectChangeActionChoices.ACTION_UPDATE)
self.assertIsNone(objectchange.prechange_data['parameters'])
self.assertEqual(objectchange.postchange_data['parameters']['username'], 'jeff')
self.assertEqual(objectchange.postchange_data['parameters']['password'], '')
# Add a password
datasource.parameters = {
'username': 'jeff',
'password': 'foobar123',
}
objectchange = datasource.to_objectchange(ObjectChangeActionChoices.ACTION_UPDATE)
self.assertEqual(objectchange.postchange_data['parameters']['username'], 'jeff')
self.assertEqual(objectchange.postchange_data['parameters']['password'], CENSOR_TOKEN_CHANGED)
def test_password_changed(self):
datasource = DataSource.objects.create(
name='Data Source 1',
type='git',
source_url='http://localhost/',
parameters={
'username': 'jeff',
'password': 'password1',
}
)
datasource.snapshot()
# Change the password
datasource.parameters['password'] = 'password2'
objectchange = datasource.to_objectchange(ObjectChangeActionChoices.ACTION_UPDATE)
self.assertEqual(objectchange.prechange_data['parameters']['username'], 'jeff')
self.assertEqual(objectchange.prechange_data['parameters']['password'], CENSOR_TOKEN)
self.assertEqual(objectchange.postchange_data['parameters']['username'], 'jeff')
self.assertEqual(objectchange.postchange_data['parameters']['password'], CENSOR_TOKEN_CHANGED)
def test_password_removed_on_update(self):
datasource = DataSource.objects.create(
name='Data Source 1',
type='git',
source_url='http://localhost/',
parameters={
'username': 'jeff',
'password': 'foobar123',
}
)
datasource.snapshot()
objectchange = datasource.to_objectchange(ObjectChangeActionChoices.ACTION_UPDATE)
self.assertEqual(objectchange.prechange_data['parameters']['username'], 'jeff')
self.assertEqual(objectchange.prechange_data['parameters']['password'], CENSOR_TOKEN)
self.assertEqual(objectchange.postchange_data['parameters']['username'], 'jeff')
self.assertEqual(objectchange.postchange_data['parameters']['password'], CENSOR_TOKEN)
# Remove the password
datasource.parameters['password'] = ''
objectchange = datasource.to_objectchange(ObjectChangeActionChoices.ACTION_UPDATE)
self.assertEqual(objectchange.prechange_data['parameters']['username'], 'jeff')
self.assertEqual(objectchange.prechange_data['parameters']['password'], CENSOR_TOKEN)
self.assertEqual(objectchange.postchange_data['parameters']['username'], 'jeff')
self.assertEqual(objectchange.postchange_data['parameters']['password'], '')
def test_password_not_modified(self):
datasource = DataSource.objects.create(
name='Data Source 1',
type='git',
source_url='http://localhost/',
parameters={
'username': 'username1',
'password': 'foobar123',
}
)
datasource.snapshot()
# Change the username; the password is not modified
datasource.parameters['username'] = 'username2'
objectchange = datasource.to_objectchange(ObjectChangeActionChoices.ACTION_UPDATE)
self.assertEqual(objectchange.prechange_data['parameters']['username'], 'username1')
self.assertEqual(objectchange.prechange_data['parameters']['password'], CENSOR_TOKEN)
self.assertEqual(objectchange.postchange_data['parameters']['username'], 'username2')
self.assertEqual(objectchange.postchange_data['parameters']['password'], CENSOR_TOKEN)

View File

@ -1,6 +1,16 @@
from django.utils import timezone
import logging
import uuid
from datetime import datetime
from utilities.testing import ViewTestCases, create_tags
from django.urls import reverse
from django.utils import timezone
from django_rq import get_queue
from django_rq.settings import QUEUES_MAP
from django_rq.workers import get_worker
from rq.job import Job as RQ_Job, JobStatus
from rq.registry import DeferredJobRegistry, FailedJobRegistry, FinishedJobRegistry, StartedJobRegistry
from utilities.testing import TestCase, ViewTestCases, create_tags
from ..models import *
@ -87,3 +97,211 @@ class DataFileTestCase(
),
)
DataFile.objects.bulk_create(data_files)
class BackgroundTaskTestCase(TestCase):
user_permissions = ()
# Dummy worker functions
@staticmethod
def dummy_job_default():
return "Job finished"
@staticmethod
def dummy_job_high():
return "Job finished"
@staticmethod
def dummy_job_failing():
raise Exception("Job failed")
def setUp(self):
super().setUp()
self.user.is_staff = True
self.user.is_active = True
self.user.save()
# Clear all queues prior to running each test
get_queue('default').connection.flushall()
get_queue('high').connection.flushall()
get_queue('low').connection.flushall()
def test_background_queue_list(self):
url = reverse('core:background_queue_list')
# Attempt to load view without permission
self.user.is_staff = False
self.user.save()
response = self.client.get(url)
self.assertEqual(response.status_code, 403)
# Load view with permission
self.user.is_staff = True
self.user.save()
response = self.client.get(url)
self.assertEqual(response.status_code, 200)
self.assertIn('default', str(response.content))
self.assertIn('high', str(response.content))
self.assertIn('low', str(response.content))
def test_background_tasks_list_default(self):
queue = get_queue('default')
queue.enqueue(self.dummy_job_default)
queue_index = QUEUES_MAP['default']
response = self.client.get(reverse('core:background_task_list', args=[queue_index, 'queued']))
self.assertEqual(response.status_code, 200)
self.assertIn('BackgroundTaskTestCase.dummy_job_default', str(response.content))
def test_background_tasks_list_high(self):
queue = get_queue('high')
queue.enqueue(self.dummy_job_high)
queue_index = QUEUES_MAP['high']
response = self.client.get(reverse('core:background_task_list', args=[queue_index, 'queued']))
self.assertEqual(response.status_code, 200)
self.assertIn('BackgroundTaskTestCase.dummy_job_high', str(response.content))
def test_background_tasks_list_finished(self):
queue = get_queue('default')
job = queue.enqueue(self.dummy_job_default)
queue_index = QUEUES_MAP['default']
registry = FinishedJobRegistry(queue.name, queue.connection)
registry.add(job, 2)
response = self.client.get(reverse('core:background_task_list', args=[queue_index, 'finished']))
self.assertEqual(response.status_code, 200)
self.assertIn('BackgroundTaskTestCase.dummy_job_default', str(response.content))
def test_background_tasks_list_failed(self):
queue = get_queue('default')
job = queue.enqueue(self.dummy_job_default)
queue_index = QUEUES_MAP['default']
registry = FailedJobRegistry(queue.name, queue.connection)
registry.add(job, 2)
response = self.client.get(reverse('core:background_task_list', args=[queue_index, 'failed']))
self.assertEqual(response.status_code, 200)
self.assertIn('BackgroundTaskTestCase.dummy_job_default', str(response.content))
def test_background_tasks_scheduled(self):
queue = get_queue('default')
queue.enqueue_at(datetime.now(), self.dummy_job_default)
queue_index = QUEUES_MAP['default']
response = self.client.get(reverse('core:background_task_list', args=[queue_index, 'scheduled']))
self.assertEqual(response.status_code, 200)
self.assertIn('BackgroundTaskTestCase.dummy_job_default', str(response.content))
def test_background_tasks_list_deferred(self):
queue = get_queue('default')
job = queue.enqueue(self.dummy_job_default)
queue_index = QUEUES_MAP['default']
registry = DeferredJobRegistry(queue.name, queue.connection)
registry.add(job, 2)
response = self.client.get(reverse('core:background_task_list', args=[queue_index, 'deferred']))
self.assertEqual(response.status_code, 200)
self.assertIn('BackgroundTaskTestCase.dummy_job_default', str(response.content))
def test_background_task(self):
queue = get_queue('default')
job = queue.enqueue(self.dummy_job_default)
response = self.client.get(reverse('core:background_task', args=[job.id]))
self.assertEqual(response.status_code, 200)
self.assertIn('Background Tasks', str(response.content))
self.assertIn(str(job.id), str(response.content))
self.assertIn('Callable', str(response.content))
self.assertIn('Meta', str(response.content))
self.assertIn('Keyword Arguments', str(response.content))
def test_background_task_delete(self):
queue = get_queue('default')
job = queue.enqueue(self.dummy_job_default)
response = self.client.post(reverse('core:background_task_delete', args=[job.id]), {'confirm': True})
self.assertEqual(response.status_code, 302)
self.assertFalse(RQ_Job.exists(job.id, connection=queue.connection))
self.assertNotIn(job.id, queue.job_ids)
def test_background_task_requeue(self):
queue = get_queue('default')
# Enqueue & run a job that will fail
job = queue.enqueue(self.dummy_job_failing)
worker = get_worker('default')
worker.work(burst=True)
self.assertTrue(job.is_failed)
# Re-enqueue the failed job and check that its status has been reset
response = self.client.get(reverse('core:background_task_requeue', args=[job.id]))
self.assertEqual(response.status_code, 302)
self.assertFalse(job.is_failed)
def test_background_task_enqueue(self):
queue = get_queue('default')
# Enqueue some jobs that each depends on its predecessor
job = previous_job = None
for _ in range(0, 3):
job = queue.enqueue(self.dummy_job_default, depends_on=previous_job)
previous_job = job
# Check that the last job to be enqueued has a status of deferred
self.assertIsNotNone(job)
self.assertEqual(job.get_status(), JobStatus.DEFERRED)
self.assertIsNone(job.enqueued_at)
# Force-enqueue the deferred job
response = self.client.get(reverse('core:background_task_enqueue', args=[job.id]))
self.assertEqual(response.status_code, 302)
# Check that job's status is updated correctly
job = queue.fetch_job(job.id)
self.assertEqual(job.get_status(), JobStatus.QUEUED)
self.assertIsNotNone(job.enqueued_at)
def test_background_task_stop(self):
queue = get_queue('default')
worker = get_worker('default')
job = queue.enqueue(self.dummy_job_default)
worker.prepare_job_execution(job)
self.assertEqual(job.get_status(), JobStatus.STARTED)
# Stop those jobs using the view
started_job_registry = StartedJobRegistry(queue.name, connection=queue.connection)
self.assertEqual(len(started_job_registry), 1)
response = self.client.get(reverse('core:background_task_stop', args=[job.id]))
self.assertEqual(response.status_code, 302)
worker.monitor_work_horse(job, queue) # Sets the job as Failed and removes from Started
self.assertEqual(len(started_job_registry), 0)
canceled_job_registry = FailedJobRegistry(queue.name, connection=queue.connection)
self.assertEqual(len(canceled_job_registry), 1)
self.assertIn(job.id, canceled_job_registry)
def test_worker_list(self):
worker1 = get_worker('default', name=uuid.uuid4().hex)
worker1.register_birth()
worker2 = get_worker('high')
worker2.register_birth()
queue_index = QUEUES_MAP['default']
response = self.client.get(reverse('core:worker_list', args=[queue_index]))
self.assertEqual(response.status_code, 200)
self.assertIn(str(worker1.name), str(response.content))
self.assertNotIn(str(worker2.name), str(response.content))
def test_worker(self):
worker1 = get_worker('default', name=uuid.uuid4().hex)
worker1.register_birth()
response = self.client.get(reverse('core:worker', args=[worker1.name]))
self.assertEqual(response.status_code, 200)
self.assertIn(str(worker1.name), str(response.content))
self.assertIn('Birth', str(response.content))
self.assertIn('Total working time', str(response.content))
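The pattern these tests rely on (enqueue a callable on a django_rq queue, then drain it with a burst worker) can be reproduced on its own. A minimal sketch, assuming NetBox's Django settings are loaded and the configured Redis backend is reachable; the dummy_job function is purely illustrative:

from django_rq import get_queue
from django_rq.workers import get_worker
from rq.job import JobStatus

def dummy_job():
    return "Job finished"

queue = get_queue('default')                # same queue helper the tests use
job = queue.enqueue(dummy_job)              # job starts out queued
get_worker('default').work(burst=True)      # process pending jobs, then exit
assert job.get_status() == JobStatus.FINISHED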

View File

@ -25,6 +25,17 @@ urlpatterns = (
path('jobs/<int:pk>/', views.JobView.as_view(), name='job'),
path('jobs/<int:pk>/delete/', views.JobDeleteView.as_view(), name='job_delete'),
# Background Tasks
path('background-queues/', views.BackgroundQueueListView.as_view(), name='background_queue_list'),
path('background-queues/<int:queue_index>/<str:status>/', views.BackgroundTaskListView.as_view(), name='background_task_list'),
path('background-tasks/<str:job_id>/', views.BackgroundTaskView.as_view(), name='background_task'),
path('background-tasks/<str:job_id>/delete/', views.BackgroundTaskDeleteView.as_view(), name='background_task_delete'),
path('background-tasks/<str:job_id>/requeue/', views.BackgroundTaskRequeueView.as_view(), name='background_task_requeue'),
path('background-tasks/<str:job_id>/enqueue/', views.BackgroundTaskEnqueueView.as_view(), name='background_task_enqueue'),
path('background-tasks/<str:job_id>/stop/', views.BackgroundTaskStopView.as_view(), name='background_task_stop'),
path('background-workers/<int:queue_index>/', views.WorkerListView.as_view(), name='worker_list'),
path('background-workers/<str:key>/', views.WorkerView.as_view(), name='worker'),
# Config revisions
path('config-revisions/', views.ConfigRevisionListView.as_view(), name='configrevision_list'),
path('config-revisions/add/', views.ConfigRevisionEditView.as_view(), name='configrevision_add'),
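As a rough illustration of how these routes are addressed elsewhere in the branch (the tests above build their URLs the same way), reverse() combined with django_rq's queue index map yields the task-list and worker URLs; the exact path prefix depends on where the core app is mounted:

from django.urls import reverse
from django_rq.settings import QUEUES_MAP

queue_index = QUEUES_MAP['default']
print(reverse('core:background_task_list', args=[queue_index, 'queued']))  # e.g. /core/background-queues/0/queued/
print(reverse('core:worker_list', args=[queue_index]))                     # e.g. /core/background-workers/0/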

View File

@ -3,13 +3,28 @@ from django.conf import settings
from django.contrib import messages
from django.contrib.auth.mixins import UserPassesTestMixin
from django.core.cache import cache
from django.http import HttpResponseForbidden
from django.http import HttpResponseForbidden, Http404
from django.shortcuts import get_object_or_404, redirect, render
from django.urls import reverse
from django.utils.translation import gettext_lazy as _
from django.views.generic import View
from django_rq.queues import get_queue_by_index, get_redis_connection
from django_rq.settings import QUEUES_MAP, QUEUES_LIST
from django_rq.utils import get_jobs, get_statistics, stop_jobs
from rq import requeue_job
from rq.exceptions import NoSuchJobError
from rq.job import Job as RQ_Job, JobStatus as RQJobStatus
from rq.registry import (
DeferredJobRegistry, FailedJobRegistry, FinishedJobRegistry, ScheduledJobRegistry, StartedJobRegistry,
)
from rq.worker import Worker
from rq.worker_registration import clean_worker_registry
from netbox.config import get_config, PARAMS
from netbox.views import generic
from netbox.views.generic.base import BaseObjectView
from netbox.views.generic.mixins import TableMixin
from utilities.forms import ConfirmationForm
from utilities.utils import count_related
from utilities.views import ContentTypePermissionRequiredMixin, register_model_view
from . import filtersets, forms, tables
@ -237,6 +252,276 @@ class ConfigRevisionRestoreView(ContentTypePermissionRequiredMixin, View):
return redirect(candidate_config.get_absolute_url())
#
# Background Tasks (RQ)
#
class BaseRQView(UserPassesTestMixin, View):
def test_func(self):
return self.request.user.is_staff
class BackgroundQueueListView(TableMixin, BaseRQView):
table = tables.BackgroundQueueTable
def get(self, request):
data = get_statistics(run_maintenance_tasks=True)["queues"]
table = self.get_table(data, request, bulk_actions=False)
return render(request, 'core/rq_queue_list.html', {
'table': table,
})
class BackgroundTaskListView(TableMixin, BaseRQView):
table = tables.BackgroundTaskTable
def get_table_data(self, request, queue, status):
jobs = []
# Call get_jobs() to return queued tasks
if status == RQJobStatus.QUEUED:
return queue.get_jobs()
# For other statuses, determine the registry to list (or raise a 404 for invalid statuses)
try:
registry_cls = {
RQJobStatus.STARTED: StartedJobRegistry,
RQJobStatus.DEFERRED: DeferredJobRegistry,
RQJobStatus.FINISHED: FinishedJobRegistry,
RQJobStatus.FAILED: FailedJobRegistry,
RQJobStatus.SCHEDULED: ScheduledJobRegistry,
}[status]
except KeyError:
raise Http404
registry = registry_cls(queue.name, queue.connection)
job_ids = registry.get_job_ids()
if status != RQJobStatus.DEFERRED:
jobs = get_jobs(queue, job_ids, registry)
else:
# Deferred jobs require special handling
for job_id in job_ids:
try:
jobs.append(RQ_Job.fetch(job_id, connection=queue.connection, serializer=queue.serializer))
except NoSuchJobError:
pass
if jobs and status == RQJobStatus.SCHEDULED:
for job in jobs:
job.scheduled_at = registry.get_scheduled_time(job)
return jobs
def get(self, request, queue_index, status):
queue = get_queue_by_index(queue_index)
data = self.get_table_data(request, queue, status)
table = self.get_table(data, request, False)
# If this is an HTMX request, return only the rendered table HTML
if request.htmx:
return render(request, 'htmx/table.html', {
'table': table,
})
return render(request, 'core/rq_task_list.html', {
'table': table,
'queue': queue,
'status': status,
})
class BackgroundTaskView(BaseRQView):
def get(self, request, job_id):
# all the RQ queues should use the same connection
config = QUEUES_LIST[0]
try:
job = RQ_Job.fetch(job_id, connection=get_redis_connection(config['connection_config']),)
except NoSuchJobError:
raise Http404(_("Job {job_id} not found").format(job_id=job_id))
queue_index = QUEUES_MAP[job.origin]
queue = get_queue_by_index(queue_index)
try:
exc_info = job._exc_info
except AttributeError:
exc_info = None
return render(request, 'core/rq_task.html', {
'queue': queue,
'job': job,
'queue_index': queue_index,
'dependency_id': job._dependency_id,
'exc_info': exc_info,
})
class BackgroundTaskDeleteView(BaseRQView):
def get(self, request, job_id):
if not request.htmx:
return redirect(reverse('core:background_queue_list'))
form = ConfirmationForm(initial=request.GET)
return render(request, 'htmx/delete_form.html', {
'object_type': 'background task',
'object': job_id,
'form': form,
'form_url': reverse('core:background_task_delete', kwargs={'job_id': job_id})
})
def post(self, request, job_id):
form = ConfirmationForm(request.POST)
if form.is_valid():
# all the RQ queues should use the same connection
config = QUEUES_LIST[0]
try:
job = RQ_Job.fetch(job_id, connection=get_redis_connection(config['connection_config']),)
except NoSuchJobError:
raise Http404(_("Job {job_id} not found").format(job_id=job_id))
queue_index = QUEUES_MAP[job.origin]
queue = get_queue_by_index(queue_index)
# Remove job id from queue and delete the actual job
queue.connection.lrem(queue.key, 0, job.id)
job.delete()
messages.success(request, f'Deleted job {job_id}')
else:
messages.error(request, f'Error deleting job: {form.errors[0]}')
return redirect(reverse('core:background_queue_list'))
class BackgroundTaskRequeueView(BaseRQView):
def get(self, request, job_id):
# all the RQ queues should use the same connection
config = QUEUES_LIST[0]
try:
job = RQ_Job.fetch(job_id, connection=get_redis_connection(config['connection_config']),)
except NoSuchJobError:
raise Http404(_("Job {job_id} not found").format(job_id=job_id))
queue_index = QUEUES_MAP[job.origin]
queue = get_queue_by_index(queue_index)
requeue_job(job_id, connection=queue.connection, serializer=queue.serializer)
messages.success(request, f'You have successfully requeued: {job_id}')
return redirect(reverse('core:background_task', args=[job_id]))
class BackgroundTaskEnqueueView(BaseRQView):
def get(self, request, job_id):
# all the RQ queues should use the same connection
config = QUEUES_LIST[0]
try:
job = RQ_Job.fetch(job_id, connection=get_redis_connection(config['connection_config']),)
except NoSuchJobError:
raise Http404(_("Job {job_id} not found").format(job_id=job_id))
queue_index = QUEUES_MAP[job.origin]
queue = get_queue_by_index(queue_index)
try:
# _enqueue_job is new in RQ 1.14, this is used to enqueue
# job regardless of its dependencies
queue._enqueue_job(job)
except AttributeError:
queue.enqueue_job(job)
# Remove job from correct registry if needed
if job.get_status() == RQJobStatus.DEFERRED:
registry = DeferredJobRegistry(queue.name, queue.connection)
registry.remove(job)
elif job.get_status() == RQJobStatus.FINISHED:
registry = FinishedJobRegistry(queue.name, queue.connection)
registry.remove(job)
elif job.get_status() == RQJobStatus.SCHEDULED:
registry = ScheduledJobRegistry(queue.name, queue.connection)
registry.remove(job)
messages.success(request, f'You have successfully enqueued: {job_id}')
return redirect(reverse('core:background_task', args=[job_id]))
class BackgroundTaskStopView(BaseRQView):
def get(self, request, job_id):
# all the RQ queues should use the same connection
config = QUEUES_LIST[0]
try:
job = RQ_Job.fetch(job_id, connection=get_redis_connection(config['connection_config']),)
except NoSuchJobError:
raise Http404(_("Job {job_id} not found").format(job_id=job_id))
queue_index = QUEUES_MAP[job.origin]
queue = get_queue_by_index(queue_index)
stopped, _ = stop_jobs(queue, job_id)
if len(stopped) == 1:
messages.success(request, f'You have successfully stopped {job_id}')
else:
messages.error(request, f'Failed to stop {job_id}')
return redirect(reverse('core:background_task', args=[job_id]))
class WorkerListView(TableMixin, BaseRQView):
table = tables.WorkerTable
def get_table_data(self, request, queue):
clean_worker_registry(queue)
all_workers = Worker.all(queue.connection)
workers = [worker for worker in all_workers if queue.name in worker.queue_names()]
return workers
def get(self, request, queue_index):
queue = get_queue_by_index(queue_index)
data = self.get_table_data(request, queue)
table = self.get_table(data, request, False)
# If this is an HTMX request, return only the rendered table HTML
if request.htmx:
if request.htmx.target != 'object_list':
table.embedded = True
# Hide selection checkboxes
if 'pk' in table.base_columns:
table.columns.hide('pk')
return render(request, 'htmx/table.html', {
'table': table,
'queue': queue,
})
return render(request, 'core/rq_worker_list.html', {
'table': table,
'queue': queue,
})
class WorkerView(BaseRQView):
def get(self, request, key):
# all the RQ queues should use the same connection
config = QUEUES_LIST[0]
worker = Worker.find_by_key('rq:worker:' + key, connection=get_redis_connection(config['connection_config']))
# Convert microseconds to milliseconds
worker.total_working_time = worker.total_working_time / 1000
return render(request, 'core/rq_worker.html', {
'worker': worker,
'job': worker.get_current_job(),
'total_working_time': worker.total_working_time * 1000,
})
#
# Plugins
#
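Several of the views above repeat the same lookup: resolve a job ID against the shared Redis connection, then recover the queue it originated from. A condensed sketch of that pattern, under the same assumption the views make (all RQ queues share one connection); the find_job helper is illustrative, not part of the branch:

from django_rq.queues import get_queue_by_index, get_redis_connection
from django_rq.settings import QUEUES_LIST, QUEUES_MAP
from rq.exceptions import NoSuchJobError
from rq.job import Job as RQ_Job

def find_job(job_id):
    # All queues are assumed to share one connection, so the first queue's
    # connection config is enough to fetch any job by ID.
    connection = get_redis_connection(QUEUES_LIST[0]['connection_config'])
    try:
        job = RQ_Job.fetch(job_id, connection=connection)
    except NoSuchJobError:
        return None, None
    queue = get_queue_by_index(QUEUES_MAP[job.origin])
    return job, queue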

View File

@ -727,7 +727,7 @@ class PowerOutletImportForm(NetBoxModelImportForm):
help_text=_('Local power port which feeds this outlet')
)
feed_leg = CSVChoiceField(
label=_('Feed lag'),
label=_('Feed leg'),
choices=PowerOutletFeedLegChoices,
required=False,
help_text=_('Electrical phase (for three-phase circuits)')
@ -1359,6 +1359,10 @@ class VirtualDeviceContextImportForm(NetBoxModelImportForm):
to_field_name='name',
help_text='Assigned tenant'
)
status = CSVChoiceField(
label=_('Status'),
choices=VirtualDeviceContextStatusChoices,
)
class Meta:
fields = [

View File

@ -1,6 +1,6 @@
import graphene
from circuits.graphql.types import CircuitTerminationType
from circuits.models import CircuitTermination
from circuits.graphql.types import CircuitTerminationType, ProviderNetworkType
from circuits.models import CircuitTermination, ProviderNetwork
from dcim.graphql.types import (
ConsolePortTemplateType,
ConsolePortType,
@ -167,3 +167,42 @@ class InventoryItemComponentType(graphene.Union):
return PowerPortType
if type(instance) is RearPort:
return RearPortType
class ConnectedEndpointType(graphene.Union):
class Meta:
types = (
CircuitTerminationType,
ConsolePortType,
ConsoleServerPortType,
FrontPortType,
InterfaceType,
PowerFeedType,
PowerOutletType,
PowerPortType,
ProviderNetworkType,
RearPortType,
)
@classmethod
def resolve_type(cls, instance, info):
if type(instance) is CircuitTermination:
return CircuitTerminationType
if type(instance) is ConsolePort:
return ConsolePortType
if type(instance) is ConsoleServerPort:
return ConsoleServerPortType
if type(instance) is FrontPort:
return FrontPortType
if type(instance) is Interface:
return InterfaceType
if type(instance) is PowerFeed:
return PowerFeedType
if type(instance) is PowerOutlet:
return PowerOutletType
if type(instance) is PowerPort:
return PowerPortType
if type(instance) is ProviderNetwork:
return ProviderNetworkType
if type(instance) is RearPort:
return RearPortType
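A brief illustration of how the union resolves: graphene hands resolve_type() the concrete model instance behind the generic relation, and the method maps it to the matching GraphQL type. A sketch, assuming a configured Django environment with the dcim app installed:

from dcim.graphql.gfk_mixins import ConnectedEndpointType
from dcim.graphql.types import InterfaceType
from dcim.models import Interface

# An unsaved Interface is enough to exercise the type mapping.
assert ConnectedEndpointType.resolve_type(Interface(), None) is InterfaceType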

View File

@ -13,7 +13,7 @@ class CabledObjectMixin:
class PathEndpointMixin:
connected_endpoints = graphene.List('dcim.graphql.gfk_mixins.LinkPeerType')
connected_endpoints = graphene.List('dcim.graphql.gfk_mixins.ConnectedEndpointType')
def resolve_connected_endpoints(self, info):
# Handle empty values

View File

@ -1115,7 +1115,7 @@ class DeviceBay(ComponentModel, TrackingModelMixin):
installed_device = models.OneToOneField(
to='dcim.Device',
on_delete=models.SET_NULL,
related_name=_('parent_bay'),
related_name='parent_bay',
blank=True,
null=True
)

View File

@ -35,13 +35,17 @@ DEVICEBAY_STATUS = """
"""
INTERFACE_IPADDRESSES = """
{% for ip in value.all %}
{% if ip.status != 'active' %}
<a href="{{ ip.get_absolute_url }}" class="badge text-bg-{{ ip.get_status_color }}" data-bs-toggle="tooltip" data-bs-placement="left" title="{{ ip.get_status_display }}">{{ ip }}</a>
{% else %}
<a href="{{ ip.get_absolute_url }}">{{ ip }}</a>
{% endif %}
{% endfor %}
{% if value.count > 3 %}
<a href="{% url 'ipam:ipaddress_list' %}?interface_id={{ record.pk }}">{{ value.count }}</a>
{% else %}
{% for ip in value.all %}
{% if ip.status != 'active' %}
<a href="{{ ip.get_absolute_url }}" class="badge text-bg-{{ ip.get_status_color }}" data-bs-toggle="tooltip" data-bs-placement="left" title="{{ ip.get_status_display }}">{{ ip }}</a>
{% else %}
<a href="{{ ip.get_absolute_url }}">{{ ip }}</a>
{% endif %}
{% endfor %}
{% endif %}
"""
INTERFACE_FHRPGROUPS = """

View File

@ -58,7 +58,11 @@ class DeviceComponentsView(generic.ObjectChildrenView):
return self.child_model.objects.restrict(request.user, 'view').filter(device=parent)
class DeviceTypeComponentsView(DeviceComponentsView):
class DeviceTypeComponentsView(generic.ObjectChildrenView):
actions = {
**DEFAULT_ACTION_PERMISSIONS,
'bulk_rename': {'change'},
}
queryset = DeviceType.objects.all()
template_name = 'dcim/devicetype/component_templates.html'
viewname = None # Used for return_url resolution
@ -2956,7 +2960,6 @@ class InventoryItemBulkDeleteView(generic.BulkDeleteView):
queryset = InventoryItem.objects.all()
filterset = filtersets.InventoryItemFilterSet
table = tables.InventoryItemTable
template_name = 'dcim/inventoryitem_bulk_delete.html'
@register_model_view(InventoryItem, 'children')

View File

@ -3,6 +3,7 @@ from django.core.exceptions import ObjectDoesNotExist
from drf_spectacular.types import OpenApiTypes
from drf_spectacular.utils import extend_schema_field
from rest_framework import serializers
from rest_framework.fields import ListField
from core.api.nested_serializers import NestedDataSourceSerializer, NestedDataFileSerializer, NestedJobSerializer
from core.api.serializers import JobSerializer
@ -125,11 +126,15 @@ class CustomFieldSerializer(ValidatedModelSerializer):
type = ChoiceField(choices=CustomFieldTypeChoices)
object_type = ContentTypeField(
queryset=ContentType.objects.all(),
required=False
required=False,
allow_null=True
)
filter_logic = ChoiceField(choices=CustomFieldFilterLogicChoices, required=False)
data_type = serializers.SerializerMethodField()
choice_set = NestedCustomFieldChoiceSetSerializer(required=False)
choice_set = NestedCustomFieldChoiceSetSerializer(
required=False,
allow_null=True
)
ui_visible = ChoiceField(choices=CustomFieldUIVisibleChoices, required=False)
ui_editable = ChoiceField(choices=CustomFieldUIEditableChoices, required=False)
@ -170,6 +175,12 @@ class CustomFieldChoiceSetSerializer(ValidatedModelSerializer):
choices=CustomFieldChoiceSetBaseChoices,
required=False
)
extra_choices = serializers.ListField(
child=serializers.ListField(
min_length=2,
max_length=2
)
)
class Meta:
model = CustomFieldChoiceSet
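With the explicit ListField declaration, each entry in extra_choices must be a two-item [value, label] pair; a later hunk in this diff adds an API test that posts single-item entries and expects a 400. A sketch of a valid payload (the name and labels here are made up):

payload = {
    "name": "Interface speeds",      # hypothetical choice set name
    "extra_choices": [
        ["1g", "1 Gbps"],            # [value, label]
        ["10g", "10 Gbps"],
    ],
}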

View File

@ -53,13 +53,13 @@ def get_dashboard(user):
return dashboard
def get_default_dashboard():
def get_default_dashboard(config=None):
from extras.models import Dashboard
dashboard = Dashboard()
default_config = settings.DEFAULT_DASHBOARD or DEFAULT_DASHBOARD
config = config or settings.DEFAULT_DASHBOARD or DEFAULT_DASHBOARD
for widget in default_config:
for widget in config:
id = str(uuid.uuid4())
dashboard.layout.append({
'id': id,

View File

@ -71,17 +71,17 @@ def enqueue_object(queue, instance, user, request_id, action):
})
def process_event_rules(event_rules, model_name, event, data, username, snapshots=None, request_id=None):
try:
def process_event_rules(event_rules, model_name, event, data, username=None, snapshots=None, request_id=None):
if username:
user = get_user_model().objects.get(username=username)
except ObjectDoesNotExist:
else:
user = None
for event_rule in event_rules:
# Evaluate event rule conditions (if any)
if not event_rule.eval_conditions(data):
return
continue
# Webhooks
if event_rule.action_type == EventRuleActionChoices.WEBHOOK:

View File

@ -142,10 +142,12 @@ class CustomLinkForm(forms.ModelForm):
}
help_texts = {
'link_text': _(
"Jinja2 template code for the link text. Reference the object as <code>{{ object }}</code>. Links "
"Jinja2 template code for the link text. Reference the object as {example}. Links "
"which render as empty text will not be displayed."
),
'link_url': _("Jinja2 template code for the link URL. Reference the object as <code>{{ object }}</code>."),
).format(example="<code>{{ object }}</code>"),
'link_url': _(
"Jinja2 template code for the link URL. Reference the object as {example}."
).format(example="<code>{{ object }}</code>"),
}

View File

@ -0,0 +1,21 @@
# Generated by Django 4.2.9 on 2024-01-19 19:46
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
('extras', '0105_customfield_min_max_values'),
]
operations = [
migrations.AlterField(
model_name='bookmark',
name='user',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL),
),
]

View File

@ -17,7 +17,7 @@ def migrate_report_jobs(apps, schema_editor):
class Migration(migrations.Migration):
dependencies = [
('extras', '0105_customfield_min_max_values'),
('extras', '0106_bookmark_user_cascade_deletion'),
]
operations = [

View File

@ -8,6 +8,16 @@ __all__ = (
class PythonModuleMixin:
def get_jobs(self, name):
"""
Returns a list of Jobs associated with this specific script or report module
:param name: The class name of the script or report
:return: List of Jobs associated with this module
"""
return self.jobs.filter(
name=name
)
@property
def path(self):
return os.path.splitext(self.file_path)[0]
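A small usage sketch, assuming a module class that mixes this in (the script views later in this diff call module.get_jobs(script.class_name) the same way); the script class name here is hypothetical:

from extras.models import ScriptModule

module = ScriptModule.objects.first()    # assumes at least one script module exists
jobs = module.get_jobs('MyScript')       # Jobs recorded for the 'MyScript' class
print(jobs.count())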

View File

@ -771,7 +771,7 @@ class Bookmark(models.Model):
)
user = models.ForeignKey(
to=settings.AUTH_USER_MODEL,
on_delete=models.PROTECT
on_delete=models.CASCADE
)
objects = RestrictedQuerySet.as_manager()

View File

@ -120,34 +120,29 @@ class ConfigContextModelQuerySet(RestrictedQuerySet):
if self.model._meta.model_name == 'device':
base_query.add((Q(locations=OuterRef('location')) | Q(locations=None)), Q.AND)
base_query.add((Q(device_types=OuterRef('device_type')) | Q(device_types=None)), Q.AND)
base_query.add((Q(roles=OuterRef('role')) | Q(roles=None)), Q.AND)
base_query.add((Q(sites=OuterRef('site')) | Q(sites=None)), Q.AND)
region_field = 'site__region'
sitegroup_field = 'site__group'
elif self.model._meta.model_name == 'virtualmachine':
base_query.add((Q(roles=OuterRef('role')) | Q(roles=None)), Q.AND)
base_query.add((Q(sites=OuterRef('cluster__site')) | Q(sites=None)), Q.AND)
base_query.add(Q(device_types=None), Q.AND)
region_field = 'cluster__site__region'
sitegroup_field = 'cluster__site__group'
base_query.add((Q(roles=OuterRef('role')) | Q(roles=None)), Q.AND)
base_query.add((Q(sites=OuterRef('site')) | Q(sites=None)), Q.AND)
base_query.add(
(Q(
regions__tree_id=OuterRef(f'{region_field}__tree_id'),
regions__level__lte=OuterRef(f'{region_field}__level'),
regions__lft__lte=OuterRef(f'{region_field}__lft'),
regions__rght__gte=OuterRef(f'{region_field}__rght'),
regions__tree_id=OuterRef('site__region__tree_id'),
regions__level__lte=OuterRef('site__region__level'),
regions__lft__lte=OuterRef('site__region__lft'),
regions__rght__gte=OuterRef('site__region__rght'),
) | Q(regions=None)),
Q.AND
)
base_query.add(
(Q(
site_groups__tree_id=OuterRef(f'{sitegroup_field}__tree_id'),
site_groups__level__lte=OuterRef(f'{sitegroup_field}__level'),
site_groups__lft__lte=OuterRef(f'{sitegroup_field}__lft'),
site_groups__rght__gte=OuterRef(f'{sitegroup_field}__rght'),
site_groups__tree_id=OuterRef('site__group__tree_id'),
site_groups__level__lte=OuterRef('site__group__level'),
site_groups__lft__lte=OuterRef('site__group__lft'),
site_groups__rght__gte=OuterRef('site__group__rght'),
) | Q(site_groups=None)),
Q.AND
)

View File

@ -68,21 +68,23 @@ def handle_changed_object(sender, instance, **kwargs):
else:
return
# Record an ObjectChange if applicable
if m2m_changed:
ObjectChange.objects.filter(
# Create/update an ObjectChange record for this change
objectchange = instance.to_objectchange(action)
# If this is a many-to-many field change, check for a previous ObjectChange instance recorded
# for this object by this request and update it
if m2m_changed and (
prev_change := ObjectChange.objects.filter(
changed_object_type=ContentType.objects.get_for_model(instance),
changed_object_id=instance.pk,
request_id=request.id
).update(
postchange_data=instance.to_objectchange(action).postchange_data
)
else:
objectchange = instance.to_objectchange(action)
if objectchange and objectchange.has_changes:
objectchange.user = request.user
objectchange.request_id = request.id
objectchange.save()
).first()
):
prev_change.postchange_data = objectchange.postchange_data
prev_change.save()
elif objectchange and objectchange.has_changes:
objectchange.user = request.user
objectchange.request_id = request.id
objectchange.save()
# If this is an M2M change, update the previously queued webhook (from post_save)
queue = events_queue.get()
@ -251,7 +253,8 @@ def process_job_start_event_rules(sender, **kwargs):
Process event rules for jobs starting.
"""
event_rules = EventRule.objects.filter(type_job_start=True, enabled=True, content_types=sender.object_type)
process_event_rules(event_rules, sender.object_type.model, EVENT_JOB_START, sender.data, sender.user.username)
username = sender.user.username if sender.user else None
process_event_rules(event_rules, sender.object_type.model, EVENT_JOB_START, sender.data, username)
@receiver(job_end)
@ -260,4 +263,5 @@ def process_job_end_event_rules(sender, **kwargs):
Process event rules for jobs terminating.
"""
event_rules = EventRule.objects.filter(type_job_end=True, enabled=True, content_types=sender.object_type)
process_event_rules(event_rules, sender.object_type.model, EVENT_JOB_END, sender.data, sender.user.username)
username = sender.user.username if sender.user else None
process_event_rules(event_rules, sender.object_type.model, EVENT_JOB_END, sender.data, username)
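The two handlers above now tolerate jobs with no associated user by passing username=None, which the revised process_event_rules() signature accepts. A sketch of the call, assuming the helper lives in extras.events as this branch's layout suggests; the model name and payload are invented for illustration:

from extras.events import process_event_rules
from extras.models import EventRule

rules = EventRule.objects.filter(type_job_start=True, enabled=True)
process_event_rules(
    rules,
    model_name='device',     # hypothetical
    event='job_start',       # the handlers pass the EVENT_JOB_START constant
    data={},                 # hypothetical payload
    username=None,           # system-generated job: no user, rules are still evaluated
)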

View File

@ -14,7 +14,6 @@ from extras.reports import Report
from extras.scripts import BooleanVar, IntegerVar, Script, StringVar
from utilities.testing import APITestCase, APIViewTestCases
User = get_user_model()
@ -251,6 +250,23 @@ class CustomFieldChoiceSetTest(APIViewTestCases.APIViewTestCase):
)
CustomFieldChoiceSet.objects.bulk_create(choice_sets)
def test_invalid_choice_items(self):
"""
Attempting to define each choice as a single-item list should return a 400 error.
"""
self.add_permissions('extras.add_customfieldchoiceset')
data = {
"name": "test",
"extra_choices": [
["choice1"],
["choice2"],
["choice3"],
]
}
response = self.client.post(self._get_list_url(), data, format='json', **self.header)
self.assertEqual(response.status_code, 400)
class CustomLinkTest(APIViewTestCases.APIViewTestCase):
model = CustomLink

View File

@ -270,7 +270,12 @@ class ConfigContextTest(TestCase):
tag = Tag.objects.first()
cluster_type = ClusterType.objects.create(name="Cluster Type")
cluster_group = ClusterGroup.objects.create(name="Cluster Group")
cluster = Cluster.objects.create(name="Cluster", group=cluster_group, type=cluster_type)
cluster = Cluster.objects.create(
name="Cluster",
group=cluster_group,
type=cluster_type,
site=site,
)
region_context = ConfigContext.objects.create(
name="region",
@ -354,6 +359,41 @@ class ConfigContextTest(TestCase):
annotated_queryset = VirtualMachine.objects.filter(name=virtual_machine.name).annotate_config_context_data()
self.assertEqual(virtual_machine.get_config_context(), annotated_queryset[0].get_config_context())
def test_virtualmachine_site_context(self):
"""
Check that config context associated with a site applies to a VM whether the VM is assigned
directly to that site or via its cluster.
"""
site = Site.objects.first()
cluster_type = ClusterType.objects.create(name="Cluster Type")
cluster = Cluster.objects.create(name="Cluster", type=cluster_type, site=site)
vm_role = DeviceRole.objects.first()
# Create a ConfigContext associated with the site
context = ConfigContext.objects.create(
name="context1",
weight=100,
data={"foo": True}
)
context.sites.add(site)
# Create one VM assigned directly to the site, and one assigned via the cluster
vm1 = VirtualMachine.objects.create(name="VM 1", site=site, role=vm_role)
vm2 = VirtualMachine.objects.create(name="VM 2", cluster=cluster, role=vm_role)
# Check that their individually-rendered config contexts are identical
self.assertEqual(
vm1.get_config_context(),
vm2.get_config_context()
)
# Check that their annotated config contexts are identical
vms = VirtualMachine.objects.filter(pk__in=(vm1.pk, vm2.pk)).annotate_config_context_data()
self.assertEqual(
vms[0].get_config_context(),
vms[1].get_config_context()
)
def test_multiple_tags_return_distinct_objects(self):
"""
Tagged items use a generic relationship, which results in duplicate rows being returned when queried.

View File

@ -9,7 +9,7 @@ from django.urls import reverse
from django.utils.translation import gettext as _
from django.views.generic import View
from core.choices import JobStatusChoices, ManagedFileRootPathChoices
from core.choices import ManagedFileRootPathChoices
from core.forms import ManagedFileForm
from core.models import Job
from core.tables import JobTable
@ -24,7 +24,6 @@ from utilities.templatetags.builtins.filters import render_markdown
from utilities.utils import copy_safe_request, count_related, get_viewname, normalize_querydict, shallow_compare_dict
from utilities.views import ContentTypePermissionRequiredMixin, register_model_view
from . import filtersets, forms, tables
from .forms.reports import ReportForm
from .models import *
from .scripts import run_script
@ -1051,19 +1050,11 @@ class ScriptView(ContentTypePermissionRequiredMixin, View):
def get(self, request, module, name):
module = get_script_module(module, request)
script = module.scripts[name]()
jobs = module.get_jobs(script.class_name)
form = script.as_form(initial=normalize_querydict(request.GET))
# Look for a pending Job (use the latest one by creation timestamp)
object_type = ContentType.objects.get(app_label='extras', model='scriptmodule')
script.result = Job.objects.filter(
object_type=object_type,
object_id=module.pk,
name=script.name,
).exclude(
status__in=JobStatusChoices.TERMINAL_STATE_CHOICES
).first()
return render(request, 'extras/script.html', {
'job_count': jobs.count(),
'module': module,
'script': script,
'form': form,
@ -1075,6 +1066,7 @@ class ScriptView(ContentTypePermissionRequiredMixin, View):
module = get_script_module(module, request)
script = module.scripts[name]()
jobs = module.get_jobs(script.class_name)
form = script.as_form(request.POST, request.FILES)
# Allow execution only if RQ worker process is running
@ -1098,6 +1090,7 @@ class ScriptView(ContentTypePermissionRequiredMixin, View):
return redirect('extras:script_result', job_pk=job.pk)
return render(request, 'extras/script.html', {
'job_count': jobs.count(),
'module': module,
'script': script,
'form': form,
@ -1112,8 +1105,10 @@ class ScriptSourceView(ContentTypePermissionRequiredMixin, View):
def get(self, request, module, name):
module = get_script_module(module, request)
script = module.scripts[name]()
jobs = module.get_jobs(script.class_name)
return render(request, 'extras/script/source.html', {
'job_count': jobs.count(),
'module': module,
'script': script,
'tab': 'source',
@ -1128,13 +1123,7 @@ class ScriptJobsView(ContentTypePermissionRequiredMixin, View):
def get(self, request, module, name):
module = get_script_module(module, request)
script = module.scripts[name]()
object_type = ContentType.objects.get(app_label='extras', model='scriptmodule')
jobs = Job.objects.filter(
object_type=object_type,
object_id=module.pk,
name=script.class_name
)
jobs = module.get_jobs(script.class_name)
jobs_table = JobTable(
data=jobs,
@ -1144,6 +1133,7 @@ class ScriptJobsView(ContentTypePermissionRequiredMixin, View):
jobs_table.configure(request)
return render(request, 'extras/script/jobs.html', {
'job_count': jobs.count(),
'module': module,
'script': script,
'table': jobs_table,

View File

@ -254,7 +254,7 @@ class PrefixBulkEditForm(NetBoxModelBulkEditForm):
mark_utilized = forms.NullBooleanField(
required=False,
widget=BulkEditNullBooleanSelect(),
label=_('Treat as 100% utilized')
label=_('Treat as fully utilized')
)
description = forms.CharField(
label=_('Description'),
@ -298,7 +298,7 @@ class IPRangeBulkEditForm(NetBoxModelBulkEditForm):
mark_utilized = forms.NullBooleanField(
required=False,
widget=BulkEditNullBooleanSelect(),
label=_('Treat as 100% utilized')
label=_('Treat as fully utilized')
)
description = forms.CharField(
label=_('Description'),

View File

@ -240,7 +240,7 @@ class PrefixFilterForm(TenancyFilterForm, NetBoxModelFilterSetForm):
)
mark_utilized = forms.NullBooleanField(
required=False,
label=_('Marked as 100% utilized'),
label=_('Treat as fully utilized'),
widget=forms.Select(
choices=BOOLEAN_WITH_BLANK_CHOICES
)
@ -279,7 +279,7 @@ class IPRangeFilterForm(TenancyFilterForm, NetBoxModelFilterSetForm):
)
mark_utilized = forms.NullBooleanField(
required=False,
label=_('Marked as 100% utilized'),
label=_('Treat as fully utilized'),
widget=forms.Select(
choices=BOOLEAN_WITH_BLANK_CHOICES
)

View File

@ -214,7 +214,7 @@ class PrefixForm(TenancyForm, NetBoxModelForm):
required=False,
selector=True,
query_params={
'site_id': '$site',
'available_at_site': '$site',
},
label=_('VLAN'),
)
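The selector now uses the available_at_site filter rather than a strict site_id match, so VLANs that are not directly assigned to the chosen site can still be offered (the exact semantics live in the VLAN filterset). A hedged sketch of the equivalent queryset filter, assuming VLANFilterSet exposes available_at_site as in current NetBox releases:

from dcim.models import Site
from ipam.filtersets import VLANFilterSet
from ipam.models import VLAN

site = Site.objects.first()
candidates = VLANFilterSet({'available_at_site': site.pk}, queryset=VLAN.objects.all()).qs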

View File

@ -268,7 +268,7 @@ class Prefix(GetAvailablePrefixesMixin, PrimaryModel):
mark_utilized = models.BooleanField(
verbose_name=_('mark utilized'),
default=False,
help_text=_("Treat as 100% utilized")
help_text=_("Treat as fully utilized")
)
# Cached depth & child counts
@ -427,10 +427,10 @@ class Prefix(GetAvailablePrefixesMixin, PrimaryModel):
prefix = netaddr.IPSet(self.prefix)
child_ips = netaddr.IPSet([ip.address.ip for ip in self.get_child_ips()])
child_ranges = netaddr.IPSet()
child_ranges = []
for iprange in self.get_child_ranges():
child_ranges.add(iprange.range)
available_ips = prefix - child_ips - child_ranges
child_ranges.append(iprange.range)
available_ips = prefix - child_ips - netaddr.IPSet(child_ranges)
# IPv6 /127's, pool, or IPv4 /31-/32 sets are fully usable
if (self.family == 6 and self.prefix.prefixlen >= 127) or self.is_pool or (self.family == 4 and self.prefix.prefixlen >= 31):
@ -535,7 +535,7 @@ class IPRange(PrimaryModel):
mark_utilized = models.BooleanField(
verbose_name=_('mark utilized'),
default=False,
help_text=_("Treat as 100% utilized")
help_text=_("Treat as fully utilized")
)
clone_fields = (

View File

@ -604,7 +604,7 @@ class PrefixIPAddressesView(generic.ObjectChildrenView):
return parent.get_child_ips().restrict(request.user, 'view').prefetch_related('vrf', 'tenant', 'tenant__group')
def prep_table_data(self, request, queryset, parent):
if not get_table_ordering(request, self.table):
if not request.GET.get('q') and not get_table_ordering(request, self.table):
return add_available_ipaddresses(parent.prefix, queryset, parent.is_pool)
return queryset
@ -1068,6 +1068,12 @@ class FHRPGroupAssignmentEditView(generic.ObjectEditView):
instance.interface = get_object_or_404(content_type.model_class(), pk=request.GET.get('interface_id'))
return instance
def get_extra_addanother_params(self, request):
return {
'interface_type': request.GET.get('interface_type'),
'interface_id': request.GET.get('interface_id'),
}
@register_model_view(FHRPGroupAssignment, 'delete')
class FHRPGroupAssignmentDeleteView(generic.ObjectDeleteView):

View File

@ -39,6 +39,8 @@ REDIS = {
SECRET_KEY = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'
DJANGO_ADMIN_ENABLED = True
DEFAULT_PERMISSIONS = {}
LOGGING = {

View File

@ -36,3 +36,7 @@ DEFAULT_ACTION_PERMISSIONS = {
'bulk_edit': {'change'},
'bulk_delete': {'delete'},
}
# General-purpose tokens
CENSOR_TOKEN = '********'
CENSOR_TOKEN_CHANGED = '***CHANGED***'

View File

@ -371,19 +371,19 @@ ADMIN_MENU = Menu(
items=(
# Proxy model for auth.User
MenuItem(
link=f'users:netboxuser_list',
link=f'users:user_list',
link_text=_('Users'),
permissions=[f'auth.view_user'],
staff_only=True,
buttons=(
MenuItemButton(
link=f'users:netboxuser_add',
link=f'users:user_add',
title='Add',
icon_class='mdi mdi-plus-thick',
permissions=[f'auth.add_user']
),
MenuItemButton(
link=f'users:netboxuser_import',
link=f'users:user_import',
title='Import',
icon_class='mdi mdi-upload',
permissions=[f'auth.add_user']
@ -445,13 +445,18 @@ ADMIN_MENU = Menu(
),
),
MenuGroup(
label=_('Plugins'),
label=_('System'),
items=(
MenuItem(
link='core:plugin_list',
link_text=_('Plugins'),
staff_only=True
),
MenuItem(
link='core:background_queue_list',
link_text=_('Background Tasks'),
staff_only=True
),
),
),
),

View File

@ -29,7 +29,7 @@ from netbox.plugins import PluginConfig
# Environment setup
#
VERSION = '3.7-beta1'
VERSION = '3.7.3-dev'
# Hostname
HOSTNAME = platform.node()
@ -115,6 +115,7 @@ DEFAULT_PERMISSIONS = getattr(configuration, 'DEFAULT_PERMISSIONS', {
'users.delete_token': ({'user': '$user'},),
})
DEVELOPER = getattr(configuration, 'DEVELOPER', False)
DJANGO_ADMIN_ENABLED = getattr(configuration, 'DJANGO_ADMIN_ENABLED', False)
DOCS_ROOT = getattr(configuration, 'DOCS_ROOT', os.path.join(os.path.dirname(BASE_DIR), 'docs'))
EMAIL = getattr(configuration, 'EMAIL', {})
EVENTS_PIPELINE = getattr(configuration, 'EVENTS_PIPELINE', (
@ -123,7 +124,6 @@ EVENTS_PIPELINE = getattr(configuration, 'EVENTS_PIPELINE', (
EXEMPT_VIEW_PERMISSIONS = getattr(configuration, 'EXEMPT_VIEW_PERMISSIONS', [])
FIELD_CHOICES = getattr(configuration, 'FIELD_CHOICES', {})
FILE_UPLOAD_MAX_MEMORY_SIZE = getattr(configuration, 'FILE_UPLOAD_MAX_MEMORY_SIZE', 2621440)
GIT_PATH = getattr(configuration, 'GIT_PATH', 'git')
HTTP_PROXIES = getattr(configuration, 'HTTP_PROXIES', None)
INTERNAL_IPS = getattr(configuration, 'INTERNAL_IPS', ('127.0.0.1', '::1'))
JINJA2_FILTERS = getattr(configuration, 'JINJA2_FILTERS', {})
@ -355,7 +355,6 @@ SERVER_EMAIL = EMAIL.get('FROM_EMAIL')
#
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
@ -393,6 +392,9 @@ INSTALLED_APPS = [
'drf_spectacular_sidecar',
]
if DJANGO_ADMIN_ENABLED:
INSTALLED_APPS.insert(0, 'django.contrib.admin')
# Middleware
MIDDLEWARE = [
'graphiql_debug_toolbar.middleware.DebugToolbarMiddleware',
@ -452,6 +454,8 @@ AUTHENTICATION_BACKENDS = [
'netbox.authentication.ObjectPermissionBackend',
]
AUTH_USER_MODEL = 'users.User'
# Time zones
USE_TZ = True
@ -592,6 +596,8 @@ for param in dir(configuration):
SOCIAL_AUTH_JSONFIELD_ENABLED = True
SOCIAL_AUTH_CLEAN_USERNAME_FUNCTION = 'users.utils.clean_username'
SOCIAL_AUTH_USER_MODEL = AUTH_USER_MODEL
#
# Django Prometheus
#
@ -729,8 +735,10 @@ LANGUAGES = (
('en', _('English')),
('es', _('Spanish')),
('fr', _('French')),
('ja', _('Japanese')),
('pt', _('Portuguese')),
('ru', _('Russian')),
('tr', _('Turkish')),
)
LOCALE_PATHS = (

View File

@ -12,7 +12,7 @@ class Migration(migrations.Migration):
migrations.CreateModel(
name='DummyModel',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False)),
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False)),
('name', models.CharField(max_length=20)),
('number', models.IntegerField(default=100)),
],

View File

@ -4,23 +4,23 @@ from netbox.plugins.navigation import PluginMenu, PluginMenuButton, PluginMenuIt
items = (
PluginMenuItem(
link='plugins:dummy_plugin:dummy_models',
link='plugins:dummy_plugin:dummy_model_list',
link_text='Item 1',
buttons=(
PluginMenuButton(
link='admin:dummy_plugin_dummymodel_add',
title='Add a new dummy model',
link='plugins:dummy_plugin:dummy_model_add',
title='Button 1',
icon_class='mdi mdi-plus-thick',
),
PluginMenuButton(
link='admin:dummy_plugin_dummymodel_add',
title='Add a new dummy model',
link='plugins:dummy_plugin:dummy_model_add',
title='Button 2',
icon_class='mdi mdi-plus-thick',
),
)
),
PluginMenuItem(
link='plugins:dummy_plugin:dummy_models',
link='plugins:dummy_plugin:dummy_model_list',
link_text='Item 2',
),
)

View File

@ -4,5 +4,6 @@ from . import views
urlpatterns = (
path('models/', views.DummyModelsView.as_view(), name='dummy_models'),
path('models/', views.DummyModelsView.as_view(), name='dummy_model_list'),
path('models/add/', views.DummyModelAddView.as_view(), name='dummy_model_add'),
)

View File

@ -1,3 +1,6 @@
import random
import string
from django.http import HttpResponse
from django.views.generic import View
@ -15,6 +18,20 @@ class DummyModelsView(View):
return HttpResponse(f"Instances: {instance_count}")
class DummyModelAddView(View):
def get(self, request):
return HttpResponse(f"Create an instance")
def post(self, request):
instance = DummyModel(
name=''.join(random.choices(string.ascii_lowercase, k=8)),
number=random.randint(1, 100000)
)
instance.save()
return HttpResponse(f"Instance created")
@register_model_view(Site, 'extra', path='other-stuff')
class ExtraCoreModelView(View):

View File

@ -41,7 +41,7 @@ class PluginTest(TestCase):
def test_views(self):
# Test URL resolution
url = reverse('plugins:dummy_plugin:dummy_models')
url = reverse('plugins:dummy_plugin:dummy_model_list')
self.assertEqual(url, '/plugins/dummy-plugin/models/')
# Test GET request

View File

@ -11,7 +11,6 @@ from netbox.graphql.schema import schema
from netbox.graphql.views import GraphQLView
from netbox.plugins.urls import plugin_patterns, plugin_api_patterns
from netbox.views import HomeView, StaticMediaFailureView, SearchView, htmx
from .admin import admin_site
_patterns = [
@ -70,27 +69,25 @@ _patterns = [
# Plugins
path('plugins/', include((plugin_patterns, 'plugins'))),
path('api/plugins/', include((plugin_api_patterns, 'plugins-api'))),
# Admin
path('admin/background-tasks/', include('django_rq.urls')),
path('admin/', admin_site.urls),
]
# Django admin UI
if settings.DJANGO_ADMIN_ENABLED:
from .admin import admin_site
_patterns.append(path('admin/', admin_site.urls))
# django-debug-toolbar
if settings.DEBUG:
import debug_toolbar
_patterns += [
path('__debug__/', include(debug_toolbar.urls)),
]
_patterns.append(path('__debug__/', include(debug_toolbar.urls)))
# Prometheus metrics
if settings.METRICS_ENABLED:
_patterns += [
path('', include('django_prometheus.urls')),
]
_patterns.append(path('', include('django_prometheus.urls')))
# Prepend BASE_PATH
urlpatterns = [
path('{}'.format(settings.BASE_PATH), include(_patterns))
path(settings.BASE_PATH, include(_patterns))
]
handler404 = 'netbox.views.errors.handler_404'
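Mounting the legacy admin site is now driven entirely by the DJANGO_ADMIN_ENABLED setting, which also gates django.contrib.admin in INSTALLED_APPS per the settings hunk above. A sketch of opting back in via configuration.py:

# configuration.py
DJANGO_ADMIN_ENABLED = True   # defaults to False; restores /admin/ and django.contrib.admin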

View File

@ -2,14 +2,17 @@ import re
from collections import namedtuple
from django.conf import settings
from django.contrib import messages
from django.contrib.contenttypes.models import ContentType
from django.core.cache import cache
from django.shortcuts import redirect, render
from django.utils.translation import gettext_lazy as _
from django.views.generic import View
from django_tables2 import RequestConfig
from packaging import version
from extras.dashboard.utils import get_dashboard
from extras.constants import DEFAULT_DASHBOARD
from extras.dashboard.utils import get_dashboard, get_default_dashboard
from netbox.forms import SearchForm
from netbox.search import LookupTypes
from netbox.search.backends import search_backend
@ -32,7 +35,13 @@ class HomeView(View):
return redirect('login')
# Construct the user's custom dashboard layout
dashboard = get_dashboard(request.user).get_layout()
try:
dashboard = get_dashboard(request.user).get_layout()
except Exception:
messages.error(request, _(
"There was an error loading the dashboard configuration. A default dashboard is in use."
))
dashboard = get_default_dashboard(config=DEFAULT_DASHBOARD).get_layout()
# Check whether a new release is available. (Only for staff/superusers.)
new_release = None

Binary file not shown.

Binary file not shown.

Binary file not shown.

View File

@ -7,12 +7,12 @@ import { isTruthy } from './util';
*/
function quickSearchEventHandler(event: Event): void {
const quicksearch = event.currentTarget as HTMLInputElement;
const inputgroup = quicksearch.parentElement as HTMLDivElement;
if (isTruthy(inputgroup)) {
const clearbtn = document.getElementById("quicksearch_clear") as HTMLAnchorElement;
if (isTruthy(clearbtn)) {
if (quicksearch.value === "") {
inputgroup.classList.add("hide-last-child");
clearbtn.classList.add("d-none");
} else {
inputgroup.classList.remove("hide-last-child");
clearbtn.classList.remove("d-none");
}
}
}
@ -22,7 +22,7 @@ function quickSearchEventHandler(event: Event): void {
*/
export function initQuickSearch(): void {
const quicksearch = document.getElementById("quicksearch") as HTMLInputElement;
const clearbtn = document.getElementById("quicksearch_clear") as HTMLButtonElement;
const clearbtn = document.getElementById("quicksearch_clear") as HTMLAnchorElement;
if (isTruthy(quicksearch)) {
quicksearch.addEventListener("keyup", quickSearchEventHandler, {
passive: true

View File

@ -18,3 +18,6 @@ $table-cell-padding-y: 0.5rem;
// Fix Tabler bug #1694 in 1.0.0-beta20
$hover-bg: rgba(var(--tblr-secondary-rgb), 0.08);
// Ensure active nav-pill has a background color in dark mode
$nav-pills-link-active-bg: rgba(var(--tblr-secondary-rgb), 0.15);

View File

@ -0,0 +1,38 @@
// Rendered Markdown
.rendered-markdown {
table {
width: 100%;
// Apply a border
th {
border-bottom: 2px solid #dddddd;
padding: 8px;
}
td {
border-top: 1px solid #dddddd;
padding: 8px;
}
// Map "align" attr of column headings
th[align="left"] {
text-align: left;
}
th[align="center"] {
text-align: center;
}
th[align="right"] {
text-align: right;
}
}
}
// Markdown preview
.markdown-widget {
.preview {
border: 1px solid $border-color;
border-radius: $border-radius;
min-height: 200px;
}
}

View File

@ -10,36 +10,6 @@ span.color-label {
border-radius: $border-radius;
}
// Rendered Markdown
.rendered-markdown {
table {
width: 100%;
// Apply a border
th {
border-bottom: 2px solid #dddddd;
padding: 8px;
}
td {
border-top: 1px solid #dddddd;
padding: 8px;
}
// Map "align" attr of column headings
th[align="left"] {
text-align: left;
}
th[align="center"] {
text-align: center;
}
th[align="right"] {
text-align: right;
}
}
}
// Object hierarchy depth indicators
.record-depth {
display: inline;

View File

@ -19,4 +19,5 @@
// Custom styling
@import 'custom/code';
@import 'custom/markdown';
@import 'custom/misc';

View File

@ -38,7 +38,7 @@
<td>{% checkmark request.user.is_superuser %}</td>
</tr>
<tr>
<th scope="row">{% trans "Admin Access" %}</th>
<th scope="row">{% trans "Staff" %}</th>
<td>{% checkmark request.user.is_staff %}</td>
</tr>
</table>

View File

@ -1,25 +0,0 @@
{% extends "admin/index.html" %}
{% load i18n %}
{% block content_title %}{% endblock %}
{% block sidebar %}
{{ block.super }}
<div class="module">
<table style="width: 100%">
<caption>{% trans "System" %}</caption>
<tbody>
<tr>
<th>
<a href="{% url 'rq_home' %}">{% trans "Background Tasks" %}</a>
</th>
</tr>
<tr>
<th>
<a href="{% url 'plugins_list' %}">{% trans "Installed plugins" %}</a>
</th>
</tr>
</tbody>
</table>
</div>
{% endblock %}

View File

@ -15,7 +15,7 @@
{% with providernetwork_tab_active=form.initial.provider_network %}
<div class="row">
<div class="col-9 offset-3">
<ul class="nav nav-pills" role="tablist">
<ul class="nav nav-pills mb-1" role="tablist">
<li class="nav-item" role="presentation">
<button class="nav-link{% if not providernetwork_tab_active %} active{% endif %}" role="tab" type="button" data-bs-target="#site" data-bs-toggle="tab">{% trans "Site" %}</button>
</li>

View File

@ -49,18 +49,13 @@
<div class="col col-md-12">
<div class="card">
<h5 class="card-header">{% trans "Provider Accounts" %}</h5>
<div class="card-body htmx-container table-responsive p-0"
hx-get="{% url 'circuits:provideraccount_list' %}?provider_id={{ object.pk }}"
hx-trigger="load"
></div>
{% htmx_table 'circuits:provideraccount_list' provider_id=object.pk %}
</div>
</div>
<div class="col col-md-12">
<div class="card">
<h5 class="card-header">{% trans "Circuits" %}</h5>
<div class="card-body htmx-container table-responsive p-0"
hx-get="{% url 'circuits:circuit_list' %}?provider_id={{ object.pk }}"
hx-trigger="load"
></div>
{% htmx_table 'circuits:circuit_list' provider_id=object.pk %}
</div>
{% plugin_full_width_page object %}
</div>

View File

@ -42,10 +42,7 @@
<div class="col col-md-12">
<div class="card">
<h5 class="card-header">{% trans "Circuits" %}</h5>
<div class="card-body htmx-container table-responsive p-0"
hx-get="{% url 'circuits:circuit_list' %}?provider_account_id={{ object.pk }}"
hx-trigger="load"
></div>
{% htmx_table 'circuits:circuit_list' provider_account_id=object.pk %}
</div>
{% plugin_full_width_page object %}
</div>

View File

@ -48,10 +48,7 @@
<div class="col col-md-12">
<div class="card">
<h5 class="card-header">{% trans "Circuits" %}</h5>
<div class="card-body htmx-container table-responsive p-0"
hx-get="{% url 'circuits:circuit_list' %}?provider_network_id={{ object.pk }}"
hx-trigger="load"
></div>
{% htmx_table 'circuits:circuit_list' provider_network_id=object.pk %}
</div>
{% plugin_full_width_page object %}
</div>

View File

@ -45,7 +45,11 @@
<td>{{ param }}</td>
<td>{{ current }}</td>
<td>{{ new }}</td>
<td>{% if current != new %}<img src="{% static 'admin/img/icon-changelink.svg' %}" alt="*" title="{% trans "Changed" %}">{% endif %}</td>
<td>
{% if current != new %}
<i class="mdi mdi-pencil text-warning" title="{% trans "Changed" %}"></i>
{% endif %}
</td>
</tr>
{% endfor %}
</tbody>

Some files were not shown because too many files have changed in this diff.