Merge branch 'feature' into feature-ui

This commit is contained in:
Jeremy Stretch 2024-07-31 20:26:31 -04:00
commit a9c53dd3da
77 changed files with 980 additions and 482 deletions

View File

@ -74,6 +74,8 @@ If a default value is specified for a selection field, it must exactly match one
An object or multi-object custom field can be used to refer to a particular NetBox object or objects as the "value" for a custom field. These custom fields must define an `object_type`, which determines the type of object to which custom field instances point.
By default, an object choice field will make all objects of that type available for selection in the drop-down. The list choices can be filtered to show only objects with certain values by providing a `query_params` dict in the Related Object Filter field, as a JSON value. More information about `query_params` can be found [here](./custom-scripts.md#objectvar).
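For example, the following filter value (illustrative; the available attributes depend on the referenced object type) would limit the selection to objects whose status is "active":

```json
{
  "status": "active"
}
```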
## Custom Fields in Templates

Several features within NetBox, such as export templates and webhooks, utilize Jinja2 templating. For convenience, objects which support custom field assignment expose custom field data through the `cf` property. This is a bit cleaner than accessing custom field data through the actual field (`custom_field_data`).
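For instance, an export template might reference custom field data via `cf` directly in Jinja2 (a sketch, assuming a device custom field named `environment`; export templates receive a `queryset` context variable):

```jinja2
{# One line per device: its name and the value of the "environment" custom field #}
{% for device in queryset %}
{{ device.name }},{{ device.cf.environment }}
{% endfor %}
```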

View File

@ -86,8 +86,6 @@ CUSTOM_VALIDATORS = {
#### Referencing Related Object Attributes
!!! info "This feature was introduced in NetBox v4.0."
The attributes of a related object can be referenced by specifying a dotted path. For example, to reference the name of a region to which a site is assigned, use `region.name`:

```python
@ -104,8 +102,6 @@ CUSTOM_VALIDATORS = {
#### Validating Request Parameters
!!! info "This feature was introduced in NetBox v4.0."
In addition to validating object attributes, custom validators can also match against parameters of the current request (where available). For example, the following rule will permit only the user named "admin" to modify an object:

```json

View File

@ -18,7 +18,7 @@ Depending on its classification, each NetBox model may support various features
| [Custom links](../customization/custom-links.md) | `CustomLinksMixin` | `custom_links` | These models support the assignment of custom links |
| [Custom validation](../customization/custom-validation.md) | `CustomValidationMixin` | - | Supports the enforcement of custom validation rules |
| [Export templates](../customization/export-templates.md) | `ExportTemplatesMixin` | `export_templates` | Users can create custom export templates for these models |
| [Job results](../features/background-jobs.md) | `JobsMixin` | `jobs` | Background jobs can be scheduled for these models |
| [Journaling](../features/journaling.md) | `JournalingMixin` | `journaling` | These models support persistent historical commentary |
| [Synchronized data](../integrations/synchronized-data.md) | `SyncedDataMixin` | `synced_data` | Certain model data can be automatically synchronized from a remote data source |
| [Tagging](../models/extras/tag.md) | `TagsMixin` | `tags` | The models can be tagged with user-defined tags |

View File

@ -1,9 +1,10 @@
# Event Rules

NetBox includes the ability to automatically perform certain functions in response to internal events. These include:

* Executing a [custom script](../customization/custom-scripts.md)
* Sending a [webhook](../integrations/webhooks.md)
* Generating [user notifications](../features/notifications.md)

For example, suppose you want to automatically configure a monitoring system to start monitoring a device when its operational status is changed to active, and remove it from monitoring for any other status. You can create a webhook in NetBox for the device model and craft its content and destination URL to effect the desired change on the receiving system. You can then associate an event rule with this webhook, and the webhook will be sent automatically by NetBox whenever the configured constraints are met.
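As a sketch, the conditions for such an event rule might look like the following (using NetBox's JSON condition syntax; the attribute path shown is illustrative):

```json
{
  "and": [
    {
      "attr": "status.value",
      "value": "active"
    }
  ]
}
```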

View File

@ -0,0 +1,10 @@
# Notifications

!!! info "This feature was introduced in NetBox v4.1."

NetBox includes a system for generating user notifications, which can be marked as read or deleted by individual users. There are two built-in mechanisms for generating a notification:

* A user can subscribe to an object. When that object is modified, a notification is created to inform the user of the change.
* An [event rule](./event-rules.md) can be defined to automatically generate a notification for one or more users in response to specific system events.

Additionally, NetBox plugins can generate notifications for their own purposes.

View File

@ -1,5 +1,7 @@
# Circuit Groups

!!! info "This feature was introduced in NetBox v4.1."

[Circuits](./circuit.md) can be arranged into administrative groups for organization. The assignment of a circuit to a group is optional.

## Fields

View File

@ -42,4 +42,6 @@ The numeric weight of the module, including a unit designation (e.g. 3 kilograms
### Airflow

!!! info "The `airflow` field was introduced in NetBox v4.1."

The direction in which air circulates through the device chassis for cooling.

View File

@ -1,5 +1,7 @@
# Rack Types

!!! info "This feature was introduced in NetBox v4.1."

A rack type defines the physical characteristics of a particular model of [rack](./rack.md).

## Fields

View File

@ -42,6 +42,15 @@ The type of data this field holds. This must be one of the following:
For object and multiple-object fields only. Designates the type of NetBox object being referenced.

### Related Object Filter

!!! info "This field was introduced in NetBox v4.1."

For object and multi-object custom fields, a filter may be defined to limit the available objects when populating a field value. This filter maps object attributes to values. For example, `{"status": "active"}` will include only objects with a status of "active."

!!! warning
    This setting is employed for convenience only, and should not be relied upon to enforce data integrity.

### Weight

A numeric weight used to override alphabetic ordering of fields by name. Custom fields with a lower weight will be listed before those with a higher weight. (Note that weight applies within the context of a custom field group, if defined.)

View File

@ -18,17 +18,22 @@ The type(s) of object in NetBox that will trigger the rule.
If not selected, the event rule will not be processed.

### Event Types

The event types which will trigger the rule. At least one event type must be selected.
| Name           | Description                                 |
|----------------|---------------------------------------------|
| Object created | A new object has been created               |
| Object updated | An existing object has been modified        |
| Object deleted | An object has been deleted                  |
| Job started    | A background job is initiated               |
| Job completed  | A background job completes successfully     |
| Job failed     | A background job fails                      |
| Job errored    | A background job is aborted due to an error |
!!! tip "Custom Event Types"
The above list includes only built-in event types. NetBox plugins can also register their own custom event types.
### Conditions ### Conditions

View File

@ -16,6 +16,8 @@ A unique URL-friendly identifier. (This value can be used for filtering.)
### VLAN ID Ranges

!!! info "This field replaced the legacy `min_vid` and `max_vid` fields in NetBox v4.1."

The set of VLAN IDs which are encompassed by the group. By default, this will be the entire range of valid IEEE 802.1Q VLAN IDs (1 to 4094, inclusive). VLANs created within a group must have a VID that falls within one of these ranges. Ranges may not overlap.
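For example, a group might be limited to two ranges. As an illustrative sketch, the REST API represents this as an array of starting and ending (inclusive) VLAN ID pairs:

```json
{
  "vid_ranges": [[100, 199], [300, 399]]
}
```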
### Scope

View File

@ -50,9 +50,13 @@ The amount of running memory provisioned, in megabytes.
### Disk

The amount of disk storage provisioned, in megabytes.

!!! warning
    This field may be directly modified only on virtual machines which do not define discrete [virtual disks](./virtualdisk.md). Otherwise, it will report the sum of all attached disks.

### Serial Number

!!! info "This field was introduced in NetBox v4.1."

Optional serial number assigned to this virtual machine. Unlike devices, uniqueness is not enforced for virtual machine serial numbers.

View File

@ -20,6 +20,12 @@ The operational status of the link. Options include:
The service set identifier (SSID) for the wireless link (optional).

### Distance

!!! info "This field was introduced in NetBox v4.1."

The distance between the link's two endpoints, including a unit designation (e.g. 100 meters or 25 feet).

### Authentication Type

The type of wireless authentication in use. Options include:
@ -40,7 +46,3 @@ The security cipher used to apply wireless authentication. Options include:
### Pre-Shared Key

The security key configured on each client to grant access to the secured wireless LAN. This applies only to certain authentication types.
### Distance
The numeric distance of the link, including a unit designation (e.g. 100 meters or 25 feet).

View File

@ -0,0 +1,99 @@
# Background Jobs

!!! info "This feature was introduced in NetBox v4.1."

NetBox plugins can defer certain operations by enqueuing [background jobs](../../features/background-jobs.md), which are executed asynchronously by background workers. This is helpful for decoupling long-running processes from the user-facing request-response cycle.

For example, your plugin might need to fetch data from a remote system. Depending on the amount of data and the responsiveness of the remote server, this could take a few minutes. Deferring this task to a queued job ensures that it can be completed in the background, without interrupting the user. The data it fetches can be made available once the job has completed.

## Job Runners

A background job implements a basic [Job](../../models/core/job.md) executor for all kinds of tasks. It encapsulates the logic needed to manage the associated job object, reschedule periodic jobs at the configured interval, and handle errors. Custom jobs are added by subclassing NetBox's `JobRunner` class.
::: utilities.jobs.JobRunner
#### Example

```python title="jobs.py"
from utilities.jobs import JobRunner


class MyTestJob(JobRunner):
    class Meta:
        name = "My Test Job"

    def run(self, *args, **kwargs):
        obj = self.job.object
        # your logic goes here
```
You can schedule the background job from within your code (e.g. from a model's `save()` method or a view) by calling `MyTestJob.enqueue()`. This method passes through all arguments to `Job.enqueue()`. However, the `name` argument must not be passed, as the background job's name will be used instead.
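For instance, a model might defer work to the job whenever it is saved (a minimal sketch; `MyModel` and its field are hypothetical):

```python
from django.db import models

from .jobs import MyTestJob


class MyModel(models.Model):
    name = models.CharField(max_length=100)

    def save(self, *args, **kwargs):
        super().save(*args, **kwargs)
        # Defer the expensive work to a background job tied to this instance
        MyTestJob.enqueue(instance=self)
```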
### Attributes

`JobRunner` attributes are defined under a class named `Meta` within the job. These are optional, but encouraged.

#### `name`

This is the human-friendly name of your background job. If omitted, the class name will be used.
### Scheduled Jobs

As described above, jobs can be scheduled for immediate execution or for any later time using the `enqueue()` method. However, for management purposes, the `enqueue_once()` method allows a job to be scheduled exactly once, avoiding duplicates. If a job is already scheduled for a particular instance, a second one won't be scheduled (in a thread-safe manner). An example use case would be a periodic task that is bound to an instance in general, but not to any particular event of that instance (such as updates). The parameters of `enqueue_once()` are identical to those of `enqueue()`; a brief sketch follows this tip.

!!! tip
    It is perfectly fine to `enqueue()` additional jobs while an interval schedule is active. For example, you might schedule a periodic daily synchronization, but also trigger additional synchronizations on demand when the user presses a button.
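A sketch of ensuring a single recurring schedule for an object (the instance and interval are illustrative):

```python
# Schedule a recurring job for this instance exactly once; calling this again
# for the same instance will not create a duplicate schedule.
MyTestJob.enqueue_once(instance=my_obj, interval=1440)  # interval in minutes
```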
#### Example

```python title="jobs.py"
from utilities.jobs import JobRunner


class MyHousekeepingJob(JobRunner):
    class Meta:
        name = "Housekeeping"

    def run(self, *args, **kwargs):
        # your logic goes here
```

```python title="__init__.py"
from netbox.plugins import PluginConfig


class MyPluginConfig(PluginConfig):
    def ready(self):
        super().ready()  # ensure NetBox's own plugin initialization still runs
        from .jobs import MyHousekeepingJob
        MyHousekeepingJob.setup(interval=60)
```
## Task queues

Three task queues of differing priority are defined by default:

* High
* Default
* Low

Any tasks in the "high" queue are completed before the default queue is checked, and any tasks in the default queue are completed before those in the "low" queue.

Plugins can also add custom queues for their own needs by setting the `queues` attribute under the `PluginConfig` class. An example is included below:
```python
class MyPluginConfig(PluginConfig):
    name = 'my_plugin'
    ...
    queues = [
        'foo',
        'bar',
    ]
```

The `PluginConfig` above creates two custom queues with the following names: `my_plugin.foo` and `my_plugin.bar`. (The plugin's name is prepended to each queue to avoid conflicts between plugins.)
!!! warning "Configuring the RQ worker process"
By default, NetBox's RQ worker process only services the high, default, and low queues. Plugins which introduce custom queues should advise users to either reconfigure the default worker, or run a dedicated worker specifying the necessary queues. For example:
```
python manage.py rqworker my_plugin.foo my_plugin.bar
```

View File

@ -1,30 +0,0 @@
# Background Tasks
NetBox supports the queuing of tasks that need to be performed in the background, decoupled from the request-response cycle, using the [Python RQ](https://python-rq.org/) library. Three task queues of differing priority are defined by default:

* High
* Default
* Low

Any tasks in the "high" queue are completed before the default queue is checked, and any tasks in the default queue are completed before those in the "low" queue.
Plugins can also add custom queues for their own needs by setting the `queues` attribute under the PluginConfig class. An example is included below:
```python
class MyPluginConfig(PluginConfig):
    name = 'myplugin'
    ...
    queues = [
        'foo',
        'bar',
    ]
```
The PluginConfig above creates two custom queues with the following names `my_plugin.foo` and `my_plugin.bar`. (The plugin's name is prepended to each queue to avoid conflicts between plugins.)
!!! warning "Configuring the RQ worker process"
By default, NetBox's RQ worker process only services the high, default, and low queues. Plugins which introduce custom queues should advise users to either reconfigure the default worker, or run a dedicated worker specifying the necessary queues. For example:
```
python manage.py rqworker my_plugin.foo my_plugin.bar
```

View File

@ -1,16 +1,18 @@
# Event Types

!!! info "This feature was introduced in NetBox v4.1."

Plugins can register their own custom event types for use with NetBox [event rules](../../models/extras/eventrule.md). This is accomplished by calling the `register()` method on an instance of the `EventType` class. This can be done anywhere within the plugin. An example is provided below.
```python
from django.utils.translation import gettext_lazy as _

from netbox.events import EventType, EVENT_TYPE_KIND_SUCCESS

EventType(
    name='ticket_opened',
    text=_('Ticket opened'),
    kind=EVENT_TYPE_KIND_SUCCESS
).register()
```

::: netbox.events.EventType

View File

@ -47,6 +47,7 @@ project-name/
- __init__.py
- filtersets.py
- graphql.py
- jobs.py
- models.py
- middleware.py
- navigation.py

View File

@ -130,6 +130,8 @@ For more information about database migrations, see the [Django documentation](h
::: netbox.models.features.ExportTemplatesMixin

::: netbox.models.features.JobsMixin

::: netbox.models.features.JournalingMixin

::: netbox.models.features.TagsMixin

View File

@ -203,7 +203,7 @@ Plugins can inject custom content into certain areas of core NetBox views. This
| `right_page()` | Object view | Inject content on the right side of the page |
| `full_width_page()` | Object view | Inject content across the entire bottom of the page |

!!! info "The `navbar()` and `alerts()` methods were introduced in NetBox v4.1."

Additionally, a `render()` method is available for convenience. This method accepts the name of a template to render, and any additional context data you want to pass. Its use is optional, however.
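A minimal sketch of a template extension using `render()` (the plugin name, model, template path, and context values are hypothetical):

```python
from netbox.plugins import PluginTemplateExtension


class DeviceExtraContent(PluginTemplateExtension):
    model = 'dcim.device'

    def right_page(self):
        # Render a plugin-provided template with some additional context
        return self.render('my_plugin/device_extra.html', extra_context={
            'note': 'Hello from my_plugin!',
        })
```

Such a class would then be included in the plugin's `template_extensions` list so that NetBox discovers it.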

View File

@ -5,17 +5,44 @@
### Breaking Changes

* Several filters deprecated in v4.0 have been removed (see [#15410](https://github.com/netbox-community/netbox/issues/15410)).
* The unit size for `VirtualMachine.disk` and `VirtualDisk.size` has been changed from 1 gigabyte to 1 megabyte. Existing values have been updated accordingly.
* The `min_vid` and `max_vid` fields on the VLAN group model have been replaced with `vid_ranges`, an array of starting and ending integer pairs.
* The five individual event type fields on the EventRule model have been replaced by a single `event_types` array field, indicating each assigned event type by name.
* The `validate()` method on CustomValidator subclasses now **must** accept the `request` argument (deprecated in v4.0 by [#14279](https://github.com/netbox-community/netbox/issues/14279)).
### New Features

#### Circuit Groups ([#7025](https://github.com/netbox-community/netbox/issues/7025))

Circuits can now be assigned to groups for administrative purposes. Each circuit may be assigned to multiple groups, and each assignment may optionally indicate a priority (primary, secondary, or tertiary).

#### VLAN Group ID Ranges ([#9627](https://github.com/netbox-community/netbox/issues/9627))

The VLAN group model has been enhanced to support multiple VLAN ID (VID) ranges, whereas previously it could track only a single beginning and ending VID. VID ranges are stored as an array of beginning and ending (inclusive) integers.

#### Rack Types ([#12826](https://github.com/netbox-community/netbox/issues/12826))

A new rack type model has been introduced, which functions similarly to the device type model. Users can now define a common make and model of rack, the attributes of which are automatically populated when creating a new rack of that type.

#### Plugins Catalog Integration ([#14731](https://github.com/netbox-community/netbox/issues/14731))

The NetBox UI now integrates directly with the canonical plugins catalog hosted by NetBox Labs. In addition to locally installed plugins, users can explore available plugins and check for newer releases.

#### User Notifications ([#15621](https://github.com/netbox-community/netbox/issues/15621))

NetBox now includes a user notification system. Users can subscribe to individual objects and be alerted to changes live within the web interface. Additionally, event rules can now trigger notifications to specific users and/or groups. Plugins can also employ this notification system for their own purposes.
### Enhancements

* [#7537](https://github.com/netbox-community/netbox/issues/7537) - Add a serial number field for virtual machines
* [#8984](https://github.com/netbox-community/netbox/issues/8984) - Enable filtering of custom script output by log level
* [#11969](https://github.com/netbox-community/netbox/issues/11969) - Support for tracking airflow on racks and module types
* [#15156](https://github.com/netbox-community/netbox/issues/15156) - Add `display_url` field to all REST API serializers
* [#16359](https://github.com/netbox-community/netbox/issues/16359) - Enable plugins to embed content in the top navigation bar
* [#16580](https://github.com/netbox-community/netbox/issues/16580) - Enable individual views to enforce `LOGIN_REQUIRED` selectively (remove `AUTH_EXEMPT_PATHS`)
* [#16776](https://github.com/netbox-community/netbox/issues/16776) - Add an `alerts()` method to `PluginTemplateExtension` for embedding important information about specific objects
* [#16782](https://github.com/netbox-community/netbox/issues/16782) - Enable filtering of selection choices for object type custom fields
* [#16866](https://github.com/netbox-community/netbox/issues/16866) - Introduce a mechanism for plugins to register custom event types (for use with user notifications)
### Plugins
@ -24,13 +51,34 @@
### Other Changes

* [#14692](https://github.com/netbox-community/netbox/issues/14692) - Change atomic unit for virtual disks from 1GB to 1MB
* [#14861](https://github.com/netbox-community/netbox/issues/14861) - The URL path for UI views concerning virtual disks has been standardized to `/virtualization/virtual-disks/`
* [#15410](https://github.com/netbox-community/netbox/issues/15410) - Removed various deprecated filters
* [#15908](https://github.com/netbox-community/netbox/issues/15908) - Indicate product edition in release data
* [#16388](https://github.com/netbox-community/netbox/issues/16388) - Move all change logging resources from `extras` to `core`
* [#16884](https://github.com/netbox-community/netbox/issues/16884) - Remove the ID column from the default table configuration for changelog records

### REST API Changes

* The `/api/extras/object-changes/` endpoint has moved to `/api/core/object-changes/`
* Added the following endpoints:
    * `/api/circuits/circuit-groups/`
    * `/api/circuits/circuit-group-assignments/`
    * `/api/dcim/rack-types/`
* circuits.Circuit
    * Added the `assignments` field, which lists all group assignments
* dcim.ModuleType
    * Added the optional `airflow` choice field
* dcim.Rack
    * Added the optional `rack_type` foreign key field
    * Added the optional `airflow` choice field
* extras.CustomField
    * Added the `related_object_filter` JSON field for object and multi-object custom fields
* extras.EventRule
    * Removed the `type_create`, `type_update`, `type_delete`, `type_job_start`, and `type_job_end` boolean fields
    * Added the `event_types` array field
* ipam.VLANGroup
    * Removed the `min_vid` and `max_vid` fields
    * Added the `vid_ranges` field, an array of starting & ending VLAN IDs
* virtualization.VirtualMachine
    * Added the optional `serial` field
* wireless.WirelessLink

View File

@ -86,6 +86,7 @@ nav:
- Change Logging: 'features/change-logging.md'
- Journaling: 'features/journaling.md'
- Event Rules: 'features/event-rules.md'
- Notifications: 'features/notifications.md'
- Background Jobs: 'features/background-jobs.md'
- Auth & Permissions: 'features/authentication-permissions.md'
- API & Integration: 'features/api-integration.md'
@ -142,11 +143,11 @@ nav:
- Forms: 'plugins/development/forms.md'
- Filters & Filter Sets: 'plugins/development/filtersets.md'
- Search: 'plugins/development/search.md'
- Event Types: 'plugins/development/event-types.md'
- Data Backends: 'plugins/development/data-backends.md'
- REST API: 'plugins/development/rest-api.md'
- GraphQL API: 'plugins/development/graphql-api.md'
- Background Jobs: 'plugins/development/background-jobs.md'
- Dashboard Widgets: 'plugins/development/dashboard-widgets.md'
- Staged Changes: 'plugins/development/staged-changes.md'
- Exceptions: 'plugins/development/exceptions.md'

View File

@ -198,6 +198,7 @@ class CircuitGroupAssignmentForm(NetBoxModelForm):
    circuit = DynamicModelChoiceField(
        label=_('Circuit'),
        queryset=Circuit.objects.all(),
        selector=True
    )

    class Meta:

View File

@ -78,7 +78,7 @@ class Migration(migrations.Migration):
            options={
                'verbose_name': 'Circuit group assignment',
                'verbose_name_plural': 'Circuit group assignments',
                'ordering': ('group', 'circuit', 'priority', 'pk'),
            },
        ),
        migrations.AddConstraint(

View File

@ -203,7 +203,7 @@ class CircuitGroupAssignment(CustomFieldsMixin, ExportTemplatesMixin, TagsMixin,
    )

    class Meta:
        ordering = ('group', 'circuit', 'priority', 'pk')
        constraints = (
            models.UniqueConstraint(
                fields=('circuit', 'group'),

View File

@ -77,18 +77,22 @@ class CircuitTable(TenancyColumnsMixin, ContactsColumnMixin, NetBoxTable):
        verbose_name=_('Commit Rate')
    )
    comments = columns.MarkdownColumn(
        verbose_name=_('Comments')
    )
    tags = columns.TagColumn(
        url_name='circuits:circuit_list'
    )
    assignments = columns.ManyToManyColumn(
        verbose_name=_('Assignments'),
        linkify_item=True
    )

    class Meta(NetBoxTable.Meta):
        model = Circuit
        fields = (
            'pk', 'id', 'cid', 'provider', 'provider_account', 'type', 'status', 'tenant', 'tenant_group',
            'termination_a', 'termination_z', 'install_date', 'termination_date', 'commit_rate', 'description',
            'comments', 'contacts', 'tags', 'created', 'last_updated', 'assignments',
        )
        default_columns = (
            'pk', 'cid', 'provider', 'type', 'status', 'tenant', 'termination_a', 'termination_z', 'description',

View File

@ -7,6 +7,8 @@ from rest_framework.routers import APIRootView
from rest_framework.viewsets import ReadOnlyModelViewSet

from core import filtersets
from core.choices import DataSourceStatusChoices
from core.jobs import SyncDataSourceJob
from core.models import *
from netbox.api.metadata import ContentTypeMetadata
from netbox.api.viewsets import NetBoxModelViewSet, NetBoxReadOnlyModelViewSet
@ -36,7 +38,11 @@ class DataSourceViewSet(NetBoxModelViewSet):
        if not request.user.has_perm('core.sync_datasource', obj=datasource):
            raise PermissionDenied(_("This user does not have permission to synchronize this data source."))

        # Enqueue the sync job & update the DataSource's status
        SyncDataSourceJob.enqueue(instance=datasource, user=request.user)
        datasource.status = DataSourceStatusChoices.QUEUED
        DataSource.objects.filter(pk=datasource.pk).update(status=datasource.status)

        serializer = serializers.DataSourceSerializer(datasource, context={'request': request})
        return Response(serializer.data)

View File

@ -59,6 +59,12 @@ class JobStatusChoices(ChoiceSet):
        (STATUS_FAILED, _('Failed'), 'red'),
    )

    ENQUEUED_STATE_CHOICES = (
        STATUS_PENDING,
        STATUS_SCHEDULED,
        STATUS_RUNNING,
    )

    TERMINAL_STATE_CHOICES = (
        STATUS_COMPLETED,
        STATUS_ERRORED,

View File

@ -1,6 +1,6 @@
from django.utils.translation import gettext as _

from netbox.events import EventType, EVENT_TYPE_KIND_DANGER, EVENT_TYPE_KIND_SUCCESS, EVENT_TYPE_KIND_WARNING

__all__ = (
    'JOB_COMPLETED',
@ -24,10 +24,10 @@ JOB_FAILED = 'job_failed'
JOB_ERRORED = 'job_errored'

# Register core events
EventType(OBJECT_CREATED, _('Object created')).register()
EventType(OBJECT_UPDATED, _('Object updated')).register()
EventType(OBJECT_DELETED, _('Object deleted')).register()
EventType(JOB_STARTED, _('Job started')).register()
EventType(JOB_COMPLETED, _('Job completed'), kind=EVENT_TYPE_KIND_SUCCESS).register()
EventType(JOB_FAILED, _('Job failed'), kind=EVENT_TYPE_KIND_WARNING).register()
EventType(JOB_ERRORED, _('Job errored'), kind=EVENT_TYPE_KIND_DANGER).register()

View File

@ -1,33 +1,33 @@
import logging

from netbox.search.backends import search_backend
from utilities.jobs import JobRunner

from .choices import DataSourceStatusChoices
from .exceptions import SyncError
from .models import DataSource

logger = logging.getLogger(__name__)


class SyncDataSourceJob(JobRunner):
    """
    Call sync() on a DataSource.
    """

    class Meta:
        name = 'Synchronization'

    def run(self, *args, **kwargs):
        datasource = DataSource.objects.get(pk=self.job.object_id)

        try:
            datasource.sync()
            # Update the search cache for DataFiles belonging to this source
            search_backend.cache(datasource.datafiles.iterator())
        except Exception as e:
            DataSource.objects.filter(pk=datasource.pk).update(status=DataSourceStatusChoices.FAILED)
            if type(e) is SyncError:
                logging.error(e)
            raise e

View File

@ -0,0 +1,24 @@
import django.db.models.deletion
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('contenttypes', '0002_remove_content_type_name'),
        ('core', '0011_move_objectchange'),
    ]

    operations = [
        migrations.AlterField(
            model_name='job',
            name='object_type',
            field=models.ForeignKey(
                blank=True,
                null=True,
                on_delete=django.db.models.deletion.CASCADE,
                related_name='jobs',
                to='contenttypes.contenttype'
            ),
        ),
    ]

View File

@ -1,10 +1,10 @@
import hashlib
import logging
import os

from fnmatch import fnmatchcase
from urllib.parse import urlparse

import yaml
from django.conf import settings
from django.contrib.contenttypes.fields import GenericForeignKey
from django.core.exceptions import ValidationError
@ -12,7 +12,6 @@ from django.core.validators import RegexValidator
from django.db import models
from django.urls import reverse
from django.utils import timezone
from django.utils.module_loading import import_string
from django.utils.translation import gettext as _

from netbox.constants import CENSOR_TOKEN, CENSOR_TOKEN_CHANGED
@ -23,7 +22,6 @@ from utilities.querysets import RestrictedQuerySet
from ..choices import *
from ..exceptions import SyncError
from ..signals import post_sync, pre_sync
from .jobs import Job

__all__ = (
    'AutoSyncRecord',
@ -153,21 +151,6 @@ class DataSource(JobsMixin, PrimaryModel):
        return objectchange

    def enqueue_sync_job(self, request):
        """
        Enqueue a background job to synchronize the DataSource by calling sync().
        """
        # Set the status to "syncing"
        self.status = DataSourceStatusChoices.QUEUED
        DataSource.objects.filter(pk=self.pk).update(status=self.status)

        # Enqueue a sync job
        return Job.enqueue(
            import_string('core.jobs.sync_datasource'),
            instance=self,
            user=request.user
        )

    def get_backend(self):
        backend_params = self.parameters or {}
        return self.backend_class(self.source_url, **backend_params)

View File

@ -31,6 +31,8 @@ class Job(models.Model):
        to='contenttypes.ContentType',
        related_name='jobs',
        on_delete=models.CASCADE,
        blank=True,
        null=True
    )
    object_id = models.PositiveBigIntegerField(
        blank=True,
@ -197,25 +199,34 @@ class Job(models.Model):
        job_end.send(self)

    @classmethod
    def enqueue(cls, func, instance=None, name='', user=None, schedule_at=None, interval=None, immediate=False, **kwargs):
        """
        Create a Job instance and enqueue a job using the given callable

        Args:
            func: The callable object to be enqueued for execution
            instance: The NetBox object to which this job pertains (optional)
            name: Name for the job (optional)
            user: The user responsible for running the job
            schedule_at: Schedule the job to be executed at the passed date and time
            interval: Recurrence interval (in minutes)
            immediate: Run the job immediately without scheduling it in the background. Should be used for
                interactive management commands only.
        """
        if schedule_at and immediate:
            raise ValueError("enqueue() cannot be called with values for both schedule_at and immediate.")

        if instance:
            object_type = ObjectType.objects.get_for_model(instance, for_concrete_model=False)
            object_id = instance.pk
        else:
            object_type = object_id = None
        rq_queue_name = get_queue_for_model(object_type.model if object_type else None)
        queue = django_rq.get_queue(rq_queue_name)
        status = JobStatusChoices.STATUS_SCHEDULED if schedule_at else JobStatusChoices.STATUS_PENDING
        job = Job.objects.create(
            object_type=object_type,
            object_id=object_id,
            name=name,
            status=status,
            scheduled=schedule_at,
@ -224,8 +235,16 @@ class Job(models.Model):
            job_id=uuid.uuid4()
        )

        # Run the job immediately, rather than enqueuing it as a background task. Note that this is a synchronous
        # (blocking) operation, and execution will pause until the job completes.
        if immediate:
            func(job_id=str(job.job_id), job=job, **kwargs)

        # Schedule the job to run at a specific date & time.
        elif schedule_at:
            queue.enqueue_at(schedule_at, func, job_id=str(job.job_id), job=job, **kwargs)

        # Schedule the job to run asynchronously at the first available opportunity.
        else:
            queue.enqueue(func, job_id=str(job.job_id), job=job, **kwargs)

View File

@ -155,7 +155,6 @@ def get_catalog_plugins():
    # Populate author (if any)
    if data['author']:
        print(data['author'])
        author = PluginAuthor(
            name=data['author']['name'],
            org_id=data['author']['org_id'],

View File

@ -44,7 +44,7 @@ class CatalogPluginTable(BaseTable):
        verbose_name=_('Name')
    )
    author = tables.Column(
        accessor=tables.A('author__name'),
        verbose_name=_('Author')
    )
    is_local = columns.BooleanColumn(

View File

@ -34,6 +34,8 @@ from utilities.htmx import htmx_partial
from utilities.query import count_related
from utilities.views import ContentTypePermissionRequiredMixin, GetRelatedModelsMixin, register_model_view

from . import filtersets, forms, tables
from .choices import DataSourceStatusChoices
from .jobs import SyncDataSourceJob
from .models import *
from .plugins import get_plugins
from .tables import CatalogPluginTable, PluginVersionTable
@ -76,7 +78,11 @@ class DataSourceSyncView(BaseObjectView):
    def post(self, request, pk):
        datasource = get_object_or_404(self.queryset, pk=pk)

        # Enqueue the sync job & update the DataSource's status
        job = SyncDataSourceJob.enqueue(instance=datasource, user=request.user)
        datasource.status = DataSourceStatusChoices.QUEUED
        DataSource.objects.filter(pk=datasource.pk).update(status=datasource.status)

        messages.success(request, f"Queued job #{job.pk} to sync {datasource}")
        return redirect(datasource.get_absolute_url())

View File

@ -375,6 +375,17 @@ class RackFilterSet(NetBoxModelFilterSet, TenancyFilterSet, ContactModelFilterSe
        to_field_name='slug',
        label=_('Location (slug)'),
    )
    manufacturer_id = django_filters.ModelMultipleChoiceFilter(
        field_name='rack_type__manufacturer',
        queryset=Manufacturer.objects.all(),
        label=_('Manufacturer (ID)'),
    )
    manufacturer = django_filters.ModelMultipleChoiceFilter(
        field_name='rack_type__manufacturer__slug',
        queryset=Manufacturer.objects.all(),
        to_field_name='slug',
        label=_('Manufacturer (slug)'),
    )
    rack_type = django_filters.ModelMultipleChoiceFilter(
        field_name='rack_type__slug',
        queryset=RackType.objects.all(),

View File

@ -312,8 +312,8 @@ class RackFilterForm(TenancyFilterForm, ContactModelFilterForm, RackBaseFilterFo
        FieldSet('q', 'filter_id', 'tag'),
        FieldSet('region_id', 'site_group_id', 'site_id', 'location_id', name=_('Location')),
        FieldSet('tenant_group_id', 'tenant_id', name=_('Tenant')),
        FieldSet('status', 'role_id', 'manufacturer_id', 'rack_type_id', 'serial', 'asset_tag', name=_('Rack')),
        FieldSet('form_factor', 'width', 'u_height', 'airflow', name=_('Hardware')),
        FieldSet('starting_unit', 'desc_units', name=_('Numbering')),
        FieldSet('weight', 'max_weight', 'weight_unit', name=_('Weight')),
        FieldSet('contact', 'contact_role', 'contact_group', name=_('Contacts')),
@ -357,6 +357,19 @@ class RackFilterForm(TenancyFilterForm, ContactModelFilterForm, RackBaseFilterFo
        null_option='None',
        label=_('Role')
    )
    manufacturer_id = DynamicModelMultipleChoiceField(
        queryset=Manufacturer.objects.all(),
        required=False,
        label=_('Manufacturer')
    )
    rack_type_id = DynamicModelMultipleChoiceField(
        queryset=RackType.objects.all(),
        required=False,
        query_params={
            'manufacturer_id': '$manufacturer_id'
        },
        label=_('Rack type')
    )
    serial = forms.CharField(
        label=_('Serial'),
        required=False

View File

@ -417,6 +417,10 @@ class ModuleType(ImageAttachmentsMixin, PrimaryModel, WeightMixin):
    def get_absolute_url(self):
        return reverse('dcim:moduletype', args=[self.pk])

    @property
    def full_name(self):
        return f"{self.manufacturer} {self.model}"

    def to_yaml(self):
        data = {
            'manufacturer': self.manufacturer.name,

View File

@ -152,8 +152,8 @@ class RackType(RackBase):
    )

    clone_fields = (
        'manufacturer', 'form_factor', 'width', 'u_height', 'airflow', 'desc_units', 'outer_width', 'outer_depth',
        'outer_unit', 'mounting_depth', 'weight', 'max_weight', 'weight_unit',
    )
    prerequisite_models = (
        'dcim.Manufacturer',
@ -170,6 +170,10 @@ class RackType(RackBase):
    def get_absolute_url(self):
        return reverse('dcim:racktype', args=[self.pk])

    @property
    def full_name(self):
        return f"{self.manufacturer} {self.name}"

    def clean(self):
        super().clean()

View File

@ -84,6 +84,11 @@ class RackTypeTable(NetBoxTable):
    comments = columns.MarkdownColumn(
        verbose_name=_('Comments'),
    )
    instance_count = columns.LinkedCountColumn(
        viewname='dcim:rack_list',
        url_params={'rack_type_id': 'pk'},
        verbose_name=_('Instances')
    )
    tags = columns.TagColumn(
        url_name='dcim:rack_list'
    )
@ -92,11 +97,11 @@ class RackTypeTable(NetBoxTable):
        model = RackType
        fields = (
            'pk', 'id', 'name', 'manufacturer', 'form_factor', 'u_height', 'starting_unit', 'width', 'outer_width',
            'outer_depth', 'mounting_depth', 'airflow', 'weight', 'max_weight', 'description', 'comments',
            'instance_count', 'tags', 'created', 'last_updated',
        )
        default_columns = (
            'pk', 'name', 'manufacturer', 'type', 'u_height', 'description', 'instance_count',
        )
@ -124,6 +129,15 @@ class RackTable(TenancyColumnsMixin, ContactsColumnMixin, NetBoxTable):
    role = columns.ColoredLabelColumn(
        verbose_name=_('Role'),
    )
    manufacturer = tables.Column(
        verbose_name=_('Manufacturer'),
        accessor=Accessor('rack_type__manufacturer'),
        linkify=True
    )
    rack_type = tables.Column(
        linkify=True,
        verbose_name=_('Type')
    )
    u_height = tables.TemplateColumn(
        template_code="{{ value }}U",
        verbose_name=_('Height')
@ -169,14 +183,14 @@ class RackTable(TenancyColumnsMixin, ContactsColumnMixin, NetBoxTable):
    class Meta(NetBoxTable.Meta):
        model = Rack
        fields = (
            'pk', 'id', 'name', 'site', 'location', 'status', 'facility_id', 'tenant', 'tenant_group', 'role',
            'rack_type', 'serial', 'asset_tag', 'form_factor', 'u_height', 'starting_unit', 'width', 'outer_width',
            'outer_depth', 'mounting_depth', 'airflow', 'weight', 'max_weight', 'comments', 'device_count',
            'get_utilization', 'get_power_utilization', 'description', 'contacts', 'tags', 'created', 'last_updated',
        )
        default_columns = (
            'pk', 'name', 'site', 'location', 'status', 'facility_id', 'tenant', 'role', 'rack_type', 'u_height',
            'device_count', 'get_utilization',
        )

View File

@ -584,7 +584,9 @@ class RackRoleBulkDeleteView(generic.BulkDeleteView):
#

class RackTypeListView(generic.ObjectListView):
    queryset = RackType.objects.annotate(
        instance_count=count_related(Rack, 'rack_type')
    )
    filterset = filtersets.RackTypeFilterSet
    filterset_form = forms.RackTypeFilterForm
    table = tables.RackTypeTable

View File

@ -62,7 +62,7 @@ class CustomFieldSerializer(ValidatedModelSerializer):
        fields = [
            'id', 'url', 'display_url', 'display', 'object_types', 'type', 'related_object_type', 'data_type',
            'name', 'label', 'group_name', 'description', 'required', 'search_weight', 'filter_logic', 'ui_visible',
            'ui_editable', 'is_cloneable', 'default', 'related_object_filter', 'weight', 'validation_minimum',
            'validation_maximum', 'validation_regex', 'validation_unique', 'choice_set', 'comments', 'created',
            'last_updated',
        ]
        brief_fields = ('id', 'url', 'display', 'name', 'description')

View File

@ -1,5 +1,6 @@
from django.http import Http404
from django.shortcuts import get_object_or_404
from django.utils.module_loading import import_string
from django_rq.queues import get_connection
from rest_framework import status
from rest_framework.decorators import action
@ -11,10 +12,10 @@ from rest_framework.routers import APIRootView
from rest_framework.viewsets import ModelViewSet, ReadOnlyModelViewSet
from rq import Worker

from core.models import ObjectType
from extras import filtersets
from extras.models import *
from extras.jobs import ScriptJob
from netbox.api.authentication import IsAuthenticatedOrLoginNotRequired
from netbox.api.features import SyncedDataMixin
from netbox.api.metadata import ContentTypeMetadata
@ -273,10 +274,8 @@ class ScriptViewSet(ModelViewSet):
raise RQWorkerNotRunningException() raise RQWorkerNotRunningException()
if input_serializer.is_valid(): if input_serializer.is_valid():
Job.enqueue( ScriptJob.enqueue(
run_script,
instance=script, instance=script,
name=script.python_class.class_name,
user=request.user, user=request.user,
data=input_serializer.data['data'], data=input_serializer.data['data'],
request=copy_safe_request(request), request=copy_safe_request(request),
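The net effect of this hunk: callers no longer pass the `run_script` callable and an explicit job name, because `ScriptJob` supplies both. A minimal before/after sketch, with variable names illustrative:

```python
# Before: the callable and display name had to be passed explicitly.
Job.enqueue(
    run_script,
    instance=script,
    name=script.python_class.class_name,
    user=request.user,
    data=data,
)

# After: ScriptJob derives the callable and name from the class itself.
ScriptJob.enqueue(
    instance=script,
    user=request.user,
    data=data,
)
```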

View File

@@ -156,16 +156,16 @@ class LogLevelChoices(ChoiceSet):
     LOG_DEBUG = 'debug'
     LOG_DEFAULT = 'default'
-    LOG_SUCCESS = 'success'
     LOG_INFO = 'info'
+    LOG_SUCCESS = 'success'
     LOG_WARNING = 'warning'
     LOG_FAILURE = 'failure'
     CHOICES = (
         (LOG_DEBUG, _('Debug'), 'teal'),
         (LOG_DEFAULT, _('Default'), 'gray'),
-        (LOG_SUCCESS, _('Success'), 'green'),
         (LOG_INFO, _('Info'), 'cyan'),
+        (LOG_SUCCESS, _('Success'), 'green'),
         (LOG_WARNING, _('Warning'), 'yellow'),
         (LOG_FAILURE, _('Failure'), 'red'),
     )
@@ -173,8 +173,8 @@ class LogLevelChoices(ChoiceSet):
     SYSTEM_LEVELS = {
         LOG_DEBUG: logging.DEBUG,
         LOG_DEFAULT: logging.INFO,
-        LOG_SUCCESS: logging.INFO,
         LOG_INFO: logging.INFO,
+        LOG_SUCCESS: logging.INFO,
         LOG_WARNING: logging.WARNING,
         LOG_FAILURE: logging.ERROR,
     }
@@ -191,35 +191,6 @@ class DurationChoices(ChoiceSet):
     )
#
# Job results
#
class JobResultStatusChoices(ChoiceSet):
STATUS_PENDING = 'pending'
STATUS_SCHEDULED = 'scheduled'
STATUS_RUNNING = 'running'
STATUS_COMPLETED = 'completed'
STATUS_ERRORED = 'errored'
STATUS_FAILED = 'failed'
CHOICES = (
(STATUS_PENDING, _('Pending'), 'cyan'),
(STATUS_SCHEDULED, _('Scheduled'), 'gray'),
(STATUS_RUNNING, _('Running'), 'blue'),
(STATUS_COMPLETED, _('Completed'), 'green'),
(STATUS_ERRORED, _('Errored'), 'red'),
(STATUS_FAILED, _('Failed'), 'red'),
)
TERMINAL_STATE_CHOICES = (
STATUS_COMPLETED,
STATUS_ERRORED,
STATUS_FAILED,
)
 #
 # Webhooks
 #

View File

@@ -136,10 +136,10 @@ DEFAULT_DASHBOARD = [
 ]
 LOG_LEVEL_RANK = {
-    LogLevelChoices.LOG_DEFAULT: 0,
-    LogLevelChoices.LOG_DEBUG: 1,
-    LogLevelChoices.LOG_SUCCESS: 2,
-    LogLevelChoices.LOG_INFO: 3,
+    LogLevelChoices.LOG_DEBUG: 0,
+    LogLevelChoices.LOG_DEFAULT: 1,
+    LogLevelChoices.LOG_INFO: 2,
+    LogLevelChoices.LOG_SUCCESS: 3,
     LogLevelChoices.LOG_WARNING: 4,
     LogLevelChoices.LOG_FAILURE: 5,
 }
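Note that the rank map now places debug lowest, so a debug threshold shows everything. A minimal sketch of how such a threshold filter is applied; the `log_entries` structure here is illustrative:

```python
# Hypothetical log entries as (level, message) pairs.
log_entries = [
    ('debug', 'queried 12 devices'),
    ('info', 'created 3 interfaces'),
    ('failure', 'device X not found'),
]

threshold = LOG_LEVEL_RANK['info']  # e.g. a user-selected log_threshold
visible = [
    (level, msg) for level, msg in log_entries
    if LOG_LEVEL_RANK[level] >= threshold
]
# -> keeps the 'info' and 'failure' entries, drops 'debug'
```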

View File

@@ -1,5 +1,6 @@
-from collections import defaultdict
 import logging
+from collections import defaultdict
 from django.conf import settings
 from django.contrib.auth import get_user_model
@@ -10,7 +11,6 @@ from django.utils.translation import gettext as _
 from django_rq import get_queue
 from core.events import *
-from core.models import Job
 from netbox.config import get_config
 from netbox.constants import RQ_QUEUE_DEFAULT
 from netbox.registry import registry
@@ -126,8 +126,8 @@ def process_event_rules(event_rules, object_type, event_type, data, username=None):
         script = event_rule.action_object.python_class()
         # Enqueue a Job to record the script's execution
-        Job.enqueue(
-            "extras.scripts.run_script",
+        from extras.jobs import ScriptJob
+        ScriptJob.enqueue(
             instance=event_rule.action_object,
             name=script.name,
             user=user,

View File

@@ -67,7 +67,7 @@ class CustomFieldForm(forms.ModelForm):
         FieldSet(
             'search_weight', 'filter_logic', 'ui_visible', 'ui_editable', 'weight', 'is_cloneable', name=_('Behavior')
         ),
-        FieldSet('default', 'choice_set', name=_('Values')),
+        FieldSet('default', 'choice_set', 'related_object_filter', name=_('Values')),
         FieldSet(
             'validation_minimum', 'validation_maximum', 'validation_regex', 'validation_unique', name=_('Validation')
         ),
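For context, the new `related_object_filter` value entered in this fieldset is a `query_params`-style dict serialized as JSON. A hypothetical value; the filter key depends on the related model's filterset:

```python
# Hypothetical: limit an object-type custom field pointing at dcim.Site to
# active sites only. The "status" key is illustrative.
custom_field.related_object_filter = {"status": "active"}
custom_field.full_clean()  # validated in CustomField.clean(), shown later in this diff
```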

107
netbox/extras/jobs.py Normal file
View File

@@ -0,0 +1,107 @@
import logging
import traceback
from contextlib import nullcontext
from django.db import transaction
from django.utils.translation import gettext as _
from extras.models import Script as ScriptModel
from extras.signals import clear_events
from netbox.context_managers import event_tracking
from utilities.exceptions import AbortScript, AbortTransaction
from utilities.jobs import JobRunner
from .utils import is_report
class ScriptJob(JobRunner):
"""
Script execution job.
A wrapper for calling Script.run(). This performs error handling and provides a hook for committing changes. It
exists outside the Script class to ensure it cannot be overridden by a script author.
"""
class Meta:
# An explicit job name is not set because it doesn't make sense in this context. Currently, there's no scenario
# where jobs other than this one are used. Therefore, it is hidden, resulting in a cleaner job table overview.
name = ''
def run_script(self, script, request, data, commit):
"""
Core script execution task. We capture this within a method to allow for conditionally wrapping it with the
event_tracking context manager (which is bypassed if commit == False).
Args:
request: The WSGI request associated with this execution (if any)
data: A dictionary of data to be passed to the script upon execution
commit: Passed through to Script.run()
"""
logger = logging.getLogger(f"netbox.scripts.{script.full_name}")
logger.info(f"Running script (commit={commit})")
try:
try:
with transaction.atomic():
script.output = script.run(data, commit)
if not commit:
raise AbortTransaction()
except AbortTransaction:
script.log_info(message=_("Database changes have been reverted automatically."))
if script.failed:
logger.warning(f"Script failed")
raise
except Exception as e:
if type(e) is AbortScript:
msg = _("Script aborted with error: ") + str(e)
if is_report(type(script)):
script.log_failure(message=msg)
else:
script.log_failure(msg)
logger.error(f"Script aborted with error: {e}")
else:
stacktrace = traceback.format_exc()
script.log_failure(
message=_("An exception occurred: ") + f"`{type(e).__name__}: {e}`\n```\n{stacktrace}\n```"
)
logger.error(f"Exception raised during script execution: {e}")
if type(e) is not AbortTransaction:
script.log_info(message=_("Database changes have been reverted due to error."))
# Clear all pending events. Job termination (including setting the status) is handled by the job framework.
if request:
clear_events.send(request)
raise
# Update the job data regardless of the execution status of the job. Successes should be reported as well as
# failures.
finally:
self.job.data = script.get_job_data()
def run(self, data, request=None, commit=True, **kwargs):
"""
Run the script.
Args:
job: The Job associated with this execution
data: A dictionary of data to be passed to the script upon execution
request: The WSGI request associated with this execution (if any)
commit: Passed through to Script.run()
"""
script = ScriptModel.objects.get(pk=self.job.object_id).python_class()
# Add files to form data
if request:
files = request.FILES
for field_name, fileobj in files.items():
data[field_name] = fileobj
# Add the current request as a property of the script
script.request = request
# Execute the script. If commit is True, wrap it with the event_tracking context manager to ensure we process
# change logging, event rules, etc.
with event_tracking(request) if commit else nullcontext():
self.run_script(script, request, data, commit)

View File

@@ -1,19 +1,14 @@
 import json
 import logging
 import sys
-import traceback
 import uuid
 from django.contrib.auth import get_user_model
 from django.core.management.base import BaseCommand, CommandError
-from django.db import transaction
+from django.utils.module_loading import import_string
-from core.choices import JobStatusChoices
-from core.models import Job
+from extras.jobs import ScriptJob
 from extras.scripts import get_module_and_script
-from extras.signals import clear_events
-from netbox.context_managers import event_tracking
-from utilities.exceptions import AbortTransaction
 from utilities.request import NetBoxFakeRequest
@@ -33,44 +28,6 @@ class Command(BaseCommand):
         parser.add_argument('script', help="Script to run")
     def handle(self, *args, **options):
def _run_script():
"""
Core script execution task. We capture this within a subfunction to allow for conditionally wrapping it with
the event_tracking context manager (which is bypassed if commit == False).
"""
try:
try:
with transaction.atomic():
script.output = script.run(data=data, commit=commit)
if not commit:
raise AbortTransaction()
except AbortTransaction:
script.log_info("Database changes have been reverted automatically.")
clear_events.send(request)
job.data = script.get_job_data()
job.terminate()
except Exception as e:
stacktrace = traceback.format_exc()
script.log_failure(
f"An exception occurred: `{type(e).__name__}: {e}`\n```\n{stacktrace}\n```"
)
script.log_info("Database changes have been reverted due to error.")
logger.error(f"Exception raised during script execution: {e}")
clear_events.send(request)
job.data = script.get_job_data()
job.terminate(status=JobStatusChoices.STATUS_ERRORED, error=repr(e))
# Print any test method results
for test_name, attrs in job.data['tests'].items():
self.stdout.write(
"\t{}: {} success, {} info, {} warning, {} failure".format(
test_name, attrs['success'], attrs['info'], attrs['warning'], attrs['failure']
)
)
logger.info(f"Script completed in {job.duration}")
         User = get_user_model()
         # Params
@@ -84,8 +41,8 @@ class Command(BaseCommand):
             data = {}
         module_name, script_name = script.split('.', 1)
-        module, script = get_module_and_script(module_name, script_name)
-        script = script.python_class
+        module, script_obj = get_module_and_script(module_name, script_name)
+        script = script_obj.python_class
         # Take user from command line if provided and exists, other
         if options['user']:
@@ -120,40 +77,29 @@ class Command(BaseCommand):
         # Initialize the script form
         script = script()
         form = script.as_form(data, None)
-        # Create the job
-        job = Job.objects.create(
-            object=module,
-            name=script.class_name,
-            user=User.objects.filter(is_superuser=True).order_by('pk')[0],
-            job_id=uuid.uuid4()
-        )
-        request = NetBoxFakeRequest({
-            'META': {},
-            'POST': data,
-            'GET': {},
-            'FILES': {},
-            'user': user,
-            'path': '',
-            'id': job.job_id
-        })
-        if form.is_valid():
-            job.status = JobStatusChoices.STATUS_RUNNING
-            job.save()
-            logger.info(f"Running script (commit={commit})")
-            script.request = request
-            # Execute the script. If commit is True, wrap it with the event_tracking context manager to ensure we process
-            # change logging, webhooks, etc.
-            with event_tracking(request):
-                _run_script()
-        else:
+        if not form.is_valid():
             logger.error('Data is not valid:')
             for field, errors in form.errors.get_json_data().items():
                 for error in errors:
                     logger.error(f'\t{field}: {error.get("message")}')
-            job.status = JobStatusChoices.STATUS_ERRORED
-            job.save()
+            raise CommandError()
+        # Execute the script.
+        job = ScriptJob.enqueue(
+            instance=script_obj,
+            user=user,
+            immediate=True,
+            data=data,
+            request=NetBoxFakeRequest({
+                'META': {},
+                'POST': data,
+                'GET': {},
+                'FILES': {},
+                'user': user,
+                'path': '',
+                'id': uuid.uuid4()
+            }),
+            commit=commit,
+        )
+        logger.info(f"Script completed in {job.duration}")

View File

@@ -0,0 +1,18 @@
# Generated by Django 5.0.7 on 2024-07-26 01:49
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('extras', '0119_eventrule_event_types'),
]
operations = [
migrations.AddField(
model_name='customfield',
name='related_object_filter',
field=models.JSONField(blank=True, null=True),
),
]

View File

@@ -154,6 +154,14 @@ class CustomField(CloningMixin, ExportTemplatesMixin, ChangeLoggedModel):
             'Default value for the field (must be a JSON value). Encapsulate strings with double quotes (e.g. "Foo").'
         )
     )
+    related_object_filter = models.JSONField(
+        blank=True,
+        null=True,
+        help_text=_(
+            'Filter the object selection choices using a query_params dict (must be a JSON value). '
+            'Encapsulate strings with double quotes (e.g. "Foo").'
+        )
+    )
     weight = models.PositiveSmallIntegerField(
         default=100,
         verbose_name=_('display weight'),
@@ -373,6 +381,17 @@ class CustomField(CloningMixin, ExportTemplatesMixin, ChangeLoggedModel):
                 .format(type=self.get_type_display())
             })
+        # Related object filter can be set only for object-type fields, and must contain a dictionary mapping (if set)
+        if self.related_object_filter is not None:
+            if self.type not in (CustomFieldTypeChoices.TYPE_OBJECT, CustomFieldTypeChoices.TYPE_MULTIOBJECT):
+                raise ValidationError({
+                    'related_object_filter': _("A related object filter can be defined only for object fields.")
+                })
+            if type(self.related_object_filter) is not dict:
+                raise ValidationError({
+                    'related_object_filter': _("Filter must be defined as a dictionary mapping attributes to values.")
+                })
     def serialize(self, value):
         """
         Prepare a value for storage as JSON data.
@@ -511,7 +530,8 @@ class CustomField(CloningMixin, ExportTemplatesMixin, ChangeLoggedModel):
             field = field_class(
                 queryset=model.objects.all(),
                 required=required,
-                initial=initial
+                initial=initial,
+                query_params=self.related_object_filter
             )
         # Multiple objects
@@ -522,6 +542,7 @@ class CustomField(CloningMixin, ExportTemplatesMixin, ChangeLoggedModel):
                 queryset=model.objects.all(),
                 required=required,
                 initial=initial,
+                query_params=self.related_object_filter
             )
         # Text
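Taken together with the model hunk above, the form field generated for an object-type custom field ends up roughly equivalent to the following sketch; the model and filter values are illustrative, not taken from this diff:

```python
# Rough equivalent of what to_form_field() now produces for an object-type
# custom field on dcim.Site with related_object_filter={"status": "active"}.
field = DynamicModelChoiceField(
    queryset=Site.objects.all(),
    required=False,
    initial=None,
    query_params={"status": "active"},  # narrows the drop-down choices
)
```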

View File

@@ -27,7 +27,7 @@ def get_event_type_choices():
     """
     return [
         (name, event.text)
-        for name, event in registry['events'].items()
+        for name, event in registry['event_types'].items()
     ]
@@ -102,7 +102,7 @@ class Notification(models.Model):
     """
     Returns the registered Event which triggered this Notification.
     """
-    return registry['events'].get(self.event_type)
+    return registry['event_types'].get(self.event_type)
 class NotificationGroup(ChangeLoggedModel):

View File

@@ -2,32 +2,23 @@ import inspect
 import json
 import logging
 import os
-import traceback
-from datetime import timedelta
 import yaml
 from django import forms
 from django.conf import settings
 from django.core.validators import RegexValidator
-from django.db import transaction
 from django.utils import timezone
 from django.utils.functional import classproperty
 from django.utils.translation import gettext as _
-from core.choices import JobStatusChoices
-from core.models import Job
 from extras.choices import LogLevelChoices
-from extras.models import ScriptModule, Script as ScriptModel
+from extras.models import ScriptModule
-from extras.signals import clear_events
 from ipam.formfields import IPAddressFormField, IPNetworkFormField
 from ipam.validators import MaxPrefixLengthValidator, MinPrefixLengthValidator, prefix_validator
-from netbox.context_managers import event_tracking
-from utilities.exceptions import AbortScript, AbortTransaction
 from utilities.forms import add_blank_choice
 from utilities.forms.fields import DynamicModelChoiceField, DynamicModelMultipleChoiceField
 from utilities.forms.widgets import DatePicker, DateTimePicker
 from .forms import ScriptForm
-from .utils import is_report
 __all__ = (
@@ -48,7 +39,6 @@ __all__ = (
     'StringVar',
     'TextVar',
     'get_module_and_script',
-    'run_script',
 )
@@ -613,111 +603,3 @@ def get_module_and_script(module_name, script_name):
     module = ScriptModule.objects.get(file_path=f'{module_name}.py')
     script = module.scripts.get(name=script_name)
     return module, script
def run_script(data, job, request=None, commit=True, **kwargs):
"""
A wrapper for calling Script.run(). This performs error handling and provides a hook for committing changes. It
exists outside the Script class to ensure it cannot be overridden by a script author.
Args:
data: A dictionary of data to be passed to the script upon execution
job: The Job associated with this execution
request: The WSGI request associated with this execution (if any)
commit: Passed through to Script.run()
"""
job.start()
script = ScriptModel.objects.get(pk=job.object_id).python_class()
logger = logging.getLogger(f"netbox.scripts.{script.full_name}")
logger.info(f"Running script (commit={commit})")
# Add files to form data
if request:
files = request.FILES
for field_name, fileobj in files.items():
data[field_name] = fileobj
# Add the current request as a property of the script
script.request = request
def set_job_data(script):
job.data = {
'log': script.messages,
'output': script.output,
'tests': script.tests,
}
return job
def _run_script(job):
"""
Core script execution task. We capture this within a subfunction to allow for conditionally wrapping it with
the event_tracking context manager (which is bypassed if commit == False).
"""
try:
try:
with transaction.atomic():
script.output = script.run(data, commit)
if not commit:
raise AbortTransaction()
except AbortTransaction:
script.log_info(message=_("Database changes have been reverted automatically."))
if request:
clear_events.send(request)
job.data = script.get_job_data()
if script.failed:
logger.warning(f"Script failed")
job.terminate(status=JobStatusChoices.STATUS_FAILED)
else:
job.terminate()
except Exception as e:
if type(e) is AbortScript:
msg = _("Script aborted with error: ") + str(e)
if is_report(type(script)):
script.log_failure(message=msg)
else:
script.log_failure(msg)
logger.error(f"Script aborted with error: {e}")
else:
stacktrace = traceback.format_exc()
script.log_failure(
message=_("An exception occurred: ") + f"`{type(e).__name__}: {e}`\n```\n{stacktrace}\n```"
)
logger.error(f"Exception raised during script execution: {e}")
script.log_info(message=_("Database changes have been reverted due to error."))
job.data = script.get_job_data()
job.terminate(status=JobStatusChoices.STATUS_ERRORED, error=repr(e))
if request:
clear_events.send(request)
logger.info(f"Script completed in {job.duration}")
# Execute the script. If commit is True, wrap it with the event_tracking context manager to ensure we process
# change logging, event rules, etc.
if commit:
with event_tracking(request):
_run_script(job)
else:
_run_script(job)
# Schedule the next job if an interval has been set
if job.interval:
new_scheduled_time = job.scheduled + timedelta(minutes=job.interval)
Job.enqueue(
run_script,
instance=job.object,
name=job.name,
user=job.user,
schedule_at=new_scheduled_time,
interval=job.interval,
job_timeout=script.job_timeout,
data=data,
request=request,
commit=commit
)

View File

@@ -23,7 +23,7 @@ from virtualization.models import Cluster, ClusterGroup, ClusterType
 class CustomFieldTestCase(TestCase, ChangeLoggedFilterSetTests):
     queryset = CustomField.objects.all()
     filterset = CustomFieldFilterSet
-    ignore_fields = ('default',)
+    ignore_fields = ('default', 'related_object_filter')
     @classmethod
     def setUpTestData(cls):

View File

@@ -6,6 +6,7 @@ from django.db.models import Count, Q
 from django.http import HttpResponseBadRequest, HttpResponseForbidden, HttpResponse
 from django.shortcuts import get_object_or_404, redirect, render
 from django.urls import reverse
+from django.utils.module_loading import import_string
 from django.utils import timezone
 from django.utils.translation import gettext as _
 from django.views.generic import View
@@ -35,7 +36,6 @@ from virtualization.models import VirtualMachine
 from . import filtersets, forms, tables
 from .constants import LOG_LEVEL_RANK
 from .models import *
-from .scripts import run_script
 from .tables import ReportResultsTable, ScriptResultsTable
@@ -551,14 +551,6 @@ class EventRuleListView(generic.ObjectListView):
 class EventRuleView(generic.ObjectView):
     queryset = EventRule.objects.all()
-    def get_extra_context(self, request, instance):
-        return {
-            'event_types': [
-                event for name, event in registry['events'].items()
-                if name in instance.event_types
-            ]
-        }
 @register_model_view(EventRule, 'edit')
 class EventRuleEditView(generic.ObjectEditView):
@@ -1175,10 +1167,9 @@ class ScriptView(BaseScriptView):
         if not get_workers_for_queue('default'):
             messages.error(request, _("Unable to run script: RQ worker process not running."))
         elif form.is_valid():
-            job = Job.enqueue(
-                run_script,
+            ScriptJob = import_string("extras.jobs.ScriptJob")
+            job = ScriptJob.enqueue(
                 instance=script,
-                name=script_class.class_name,
                 user=request.user,
                 schedule_at=form.cleaned_data.pop('_schedule_at'),
                 interval=form.cleaned_data.pop('_interval'),
@@ -1246,7 +1237,10 @@ class ScriptResultView(TableMixin, generic.ObjectView):
         table = None
         index = 0
-        log_threshold = LOG_LEVEL_RANK.get(request.GET.get('log_threshold', LogLevelChoices.LOG_DEFAULT))
+        try:
+            log_threshold = LOG_LEVEL_RANK[request.GET.get('log_threshold', LogLevelChoices.LOG_DEBUG)]
+        except KeyError:
+            log_threshold = LOG_LEVEL_RANK[LogLevelChoices.LOG_DEBUG]
         if job.data:
             if 'log' in job.data:
@@ -1303,12 +1297,16 @@ class ScriptResultView(TableMixin, generic.ObjectView):
         if job.completed:
             table = self.get_table(job, request, bulk_actions=False)
+        log_threshold = request.GET.get('log_threshold', LogLevelChoices.LOG_DEBUG)
+        if log_threshold not in LOG_LEVEL_RANK:
+            log_threshold = LogLevelChoices.LOG_DEBUG
         context = {
             'script': job.object,
             'job': job,
             'table': table,
             'log_levels': dict(LogLevelChoices),
-            'log_threshold': request.GET.get('log_threshold', LogLevelChoices.LOG_DEFAULT)
+            'log_threshold': log_threshold,
         }
         if job.data and 'log' in job.data:

View File

@@ -90,42 +90,45 @@ def add_available_ipaddresses(prefix, ipaddress_list, is_pool=False):
     return output
-def available_vlans_from_range(vlans, vlan_group, vlan_range):
+def available_vlans_from_range(vlans, vlan_group, vid_range):
     """
     Create fake records for all gaps between used VLANs
     """
-    min_vid = int(vlan_range.lower) if vlan_range else VLAN_VID_MIN
-    max_vid = int(vlan_range.upper) if vlan_range else VLAN_VID_MAX
+    min_vid = int(vid_range.lower) if vid_range else VLAN_VID_MIN
+    max_vid = int(vid_range.upper) if vid_range else VLAN_VID_MAX
     if not vlans:
         return [{
             'vid': min_vid,
             'vlan_group': vlan_group,
-            'available': max_vid - min_vid + 1
+            'available': max_vid - min_vid
         }]
-    prev_vid = max_vid
+    prev_vid = min_vid - 1
     new_vlans = []
     for vlan in vlans:
+        # Ignore VIDs outside the range
+        if not min_vid <= vlan.vid < max_vid:
+            continue
+        # Annotate any available VIDs between the previous (or minimum) VID
+        # and the current VID
         if vlan.vid - prev_vid > 1:
             new_vlans.append({
                 'vid': prev_vid + 1,
                 'vlan_group': vlan_group,
                 'available': vlan.vid - prev_vid - 1,
             })
         prev_vid = vlan.vid
-    if vlans[0].vid > min_vid:
-        new_vlans.append({
-            'vid': min_vid,
-            'vlan_group': vlan_group,
-            'available': vlans[0].vid - min_vid,
-        })
+    # Annotate any remaining available VLANs
     if prev_vid < max_vid:
         new_vlans.append({
             'vid': prev_vid + 1,
             'vlan_group': vlan_group,
-            'available': max_vid - prev_vid,
+            'available': max_vid - prev_vid - 1,
         })
     return new_vlans
@@ -136,8 +139,8 @@ def add_available_vlans(vlans, vlan_group):
     Create fake records for all gaps between used VLANs
     """
     new_vlans = []
-    for vlan_range in vlan_group.vid_ranges:
-        new_vlans.extend(available_vlans_from_range(vlans, vlan_group, vlan_range))
+    for vid_range in vlan_group.vid_ranges:
+        new_vlans.extend(available_vlans_from_range(vlans, vlan_group, vid_range))
     vlans = list(vlans) + new_vlans
     vlans.sort(key=lambda v: v.vid if type(v) is VLAN else v['vid'])
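A worked example of the corrected arithmetic, assuming the half-open semantics of `vid_range` (upper bound exclusive) implied by the new `min_vid <= vlan.vid < max_vid` check:

```python
gaps = available_vlans_from_range(vlans, vlan_group, vid_range)
# With vid_range = [100, 110) and used VIDs {102, 105}, gaps is:
#   [{'vid': 100, 'vlan_group': ..., 'available': 2},   # 100-101
#    {'vid': 103, 'vlan_group': ..., 'available': 2},   # 103-104
#    {'vid': 106, 'vlan_group': ..., 'available': 4}]   # 106-109
# 8 available + 2 used = 10 VIDs, the full width of the half-open range.
```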

View File

@@ -23,6 +23,9 @@ ADVISORY_LOCK_KEYS = {
     'wirelesslangroup': 105600,
     'inventoryitem': 105700,
     'inventoryitemtemplate': 105800,
+    # Jobs
+    'job-schedules': 110100,
 }
 # Default view action permission mapping

View File

@@ -2,41 +2,41 @@ from dataclasses import dataclass
 from netbox.registry import registry
-EVENT_TYPE_INFO = 'info'
-EVENT_TYPE_SUCCESS = 'success'
-EVENT_TYPE_WARNING = 'warning'
-EVENT_TYPE_DANGER = 'danger'
+EVENT_TYPE_KIND_INFO = 'info'
+EVENT_TYPE_KIND_SUCCESS = 'success'
+EVENT_TYPE_KIND_WARNING = 'warning'
+EVENT_TYPE_KIND_DANGER = 'danger'
 __all__ = (
-    'EVENT_TYPE_DANGER',
-    'EVENT_TYPE_INFO',
-    'EVENT_TYPE_SUCCESS',
-    'EVENT_TYPE_WARNING',
-    'Event',
-    'get_event',
+    'EVENT_TYPE_KIND_DANGER',
+    'EVENT_TYPE_KIND_INFO',
+    'EVENT_TYPE_KIND_SUCCESS',
+    'EVENT_TYPE_KIND_WARNING',
+    'EventType',
+    'get_event_type',
     'get_event_type_choices',
     'get_event_text',
 )
-def get_event(name):
-    return registry['events'].get(name)
+def get_event_type(name):
+    return registry['event_types'].get(name)
 def get_event_text(name):
-    if event := registry['events'].get(name):
+    if event := registry['event_types'].get(name):
         return event.text
     return ''
 def get_event_type_choices():
     return [
-        (event.name, event.text) for event in registry['events'].values()
+        (event.name, event.text) for event in registry['event_types'].values()
     ]
 @dataclass
-class Event:
+class EventType:
     """
     A type of event which can occur in NetBox. Event rules can be defined to automatically
     perform some action in response to an event.
@@ -44,32 +44,32 @@ class Event:
     Args:
         name: The unique name under which the event is registered.
         text: The human-friendly event name. This should support translation.
-        type: The event's classification (info, success, warning, or danger). The default type is info.
+        kind: The event's classification (info, success, warning, or danger). The default type is info.
     """
     name: str
     text: str
-    type: str = EVENT_TYPE_INFO
+    kind: str = EVENT_TYPE_KIND_INFO
     def __str__(self):
         return self.text
     def register(self):
-        if self.name in registry['events']:
-            raise Exception(f"An event named {self.name} has already been registered!")
-        registry['events'][self.name] = self
+        if self.name in registry['event_types']:
+            raise Exception(f"An event type named {self.name} has already been registered!")
+        registry['event_types'][self.name] = self
     def color(self):
         return {
-            EVENT_TYPE_INFO: 'blue',
-            EVENT_TYPE_SUCCESS: 'green',
-            EVENT_TYPE_WARNING: 'orange',
-            EVENT_TYPE_DANGER: 'red',
-        }.get(self.type)
+            EVENT_TYPE_KIND_INFO: 'blue',
+            EVENT_TYPE_KIND_SUCCESS: 'green',
+            EVENT_TYPE_KIND_WARNING: 'orange',
+            EVENT_TYPE_KIND_DANGER: 'red',
+        }.get(self.kind)
     def icon(self):
         return {
-            EVENT_TYPE_INFO: 'mdi mdi-information',
-            EVENT_TYPE_SUCCESS: 'mdi mdi-check-circle',
-            EVENT_TYPE_WARNING: 'mdi mdi-alert-box',
-            EVENT_TYPE_DANGER: 'mdi mdi-alert-octagon',
-        }.get(self.type)
+            EVENT_TYPE_KIND_INFO: 'mdi mdi-information',
+            EVENT_TYPE_KIND_SUCCESS: 'mdi mdi-check-circle',
+            EVENT_TYPE_KIND_WARNING: 'mdi mdi-alert-box',
+            EVENT_TYPE_KIND_DANGER: 'mdi mdi-alert-octagon',
+        }.get(self.kind)
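A minimal sketch of registering a custom event type under the renamed API; the event name and text are illustrative, and the import path is assumed:

```python
from django.utils.translation import gettext_lazy as _
from netbox.events import EventType, EVENT_TYPE_KIND_SUCCESS  # import path assumed

EventType(
    name='ticket_closed',        # hypothetical event name
    text=_('Ticket closed'),
    kind=EVENT_TYPE_KIND_SUCCESS,
).register()
```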

View File

@@ -289,7 +289,7 @@ class CustomFieldsMixin(models.Model):
         # Validate uniqueness if enforced
         if custom_fields[field_name].validation_unique and value not in CUSTOMFIELD_EMPTY_VALUES:
-            if self._meta.model.objects.filter(**{
+            if self._meta.model.objects.exclude(pk=self.pk).filter(**{
                 f'custom_field_data__{field_name}': value
             }).exists():
                 raise ValidationError(_("Custom field '{name}' must have a unique value.").format(

View File

@@ -25,7 +25,7 @@ registry = Registry({
     'counter_fields': collections.defaultdict(dict),
     'data_backends': dict(),
     'denormalized_fields': collections.defaultdict(list),
-    'events': dict(),
+    'event_types': dict(),
     'model_features': dict(),
     'models': collections.defaultdict(set),
     'plugins': dict(),

View File

@@ -20,23 +20,23 @@
         <table class="table table-hover attr-table">
           <tr>
             <th scope="row">{% trans "Group" %}</th>
-            <td>{{ object.group }}</td>
+            <td>{{ object.group|linkify }}</td>
           </tr>
           <tr>
             <th scope="row">{% trans "Circuit" %}</th>
-            <td>{{ object.circuit }}</td>
+            <td>{{ object.circuit|linkify }}</td>
           </tr>
           <tr>
             <th scope="row">{% trans "Priority" %}</th>
-            <td>{{ object.priority }}</td>
+            <td>{{ object.get_priority_display }}</td>
           </tr>
         </table>
       </div>
       {% include 'inc/panels/tags.html' %}
-      {% include 'inc/panels/custom_fields.html' %}
       {% plugin_left_page object %}
     </div>
     <div class="col col-md-6">
+      {% include 'inc/panels/custom_fields.html' %}
       {% plugin_right_page object %}
     </div>
   </div>

View File

@@ -32,7 +32,7 @@
       {% trans "Overview" %}
     </a>
   </li>
-  {% if True or not plugin.is_local and 'commercial' not in settings.RELEASE.features %}
+  {% if not plugin.is_local and not settings.RELEASE.features.commercial %}
     <li class="nav-item" role="presentation">
       <button class="nav-link" id="install-tab" data-bs-toggle="tab" data-bs-target="#install" type="button" role="tab" aria-controls="object-list" aria-selected="false">
         {% trans "Install" %}
@@ -100,7 +100,7 @@
       </div>
     </div>
   </div>
-  {% if True or not plugin.is_local and 'commercial' not in settings.RELEASE.features %}
+  {% if not plugin.is_local and not settings.RELEASE.features.commercial %}
     <div class="tab-pane" id="install" role="tabpanel" aria-labelledby="install-tab">
       <div class="card">
         <h2 class="card-header">{% trans "Local Installation Instructions" %}</h2>

View File

@@ -60,7 +60,7 @@
   </tr>
   <tr>
     <th scope="row">{% trans "Module Type" %}</th>
-    <td>{{ object.module_type|linkify }}</td>
+    <td>{{ object.module_type|linkify:"full_name" }}</td>
   </tr>
   <tr>
     <th scope="row">{% trans "Status" %}</th>

View File

@@ -43,7 +43,7 @@
   </tr>
   <tr>
     <th scope="row">{% trans "Rack Type" %}</th>
-    <td>{{ object.rack_type|linkify|placeholder }}</td>
+    <td>{{ object.rack_type|linkify:"full_name"|placeholder }}</td>
   </tr>
   <tr>
     <th scope="row">{% trans "Role" %}</th>

View File

@@ -52,6 +52,14 @@
     <th scope="row">{% trans "Default Value" %}</th>
     <td>{{ object.default }}</td>
   </tr>
+  <tr>
+    <th scope="row">{% trans "Related object filter" %}</th>
+    {% if object.related_object_filter %}
+      <td><pre>{{ object.related_object_filter|json }}</pre></td>
+    {% else %}
+      <td>{{ ''|placeholder }}</td>
+    {% endif %}
+  </tr>
 </table>
 </div>
 <div class="card">

View File

@@ -36,7 +36,7 @@
 <div class="card">
   <h2 class="card-header">{% trans "Event Types" %}</h2>
   <ul class="list-group list-group-flush">
-    {% for name, event in registry.events.items %}
+    {% for name, event in registry.event_types.items %}
      <li class="list-group-item">
        <div class="row align-items-center">
          <div class="col-auto">
View File

@@ -53,7 +53,7 @@
 <div class="dropdown-menu">
   {% for level, name in log_levels.items %}
     <a class="dropdown-item d-flex justify-content-between" href="{% url 'extras:script_result' job_pk=job.pk %}?log_threshold={{ level }}">
-      {{ name }}
+      {{ name }}{% if forloop.first %} ({% trans "All" %}){% endif %}
       {% if level == log_threshold %}<span class="badge bg-green ms-auto"></span>{% endif %}
     </a>
   {% endfor %}
View File

@@ -1,7 +1,7 @@
 {% load i18n %}
 {% load navigation %}
-{% if 'help-center' in settings.RELEASE.features %}
+{% if settings.RELEASE.features.help_center %}
   {# Help center control #}
   <a href="#" class="nav-link px-1" aria-label="{% trans "Help center" %}">
     <i class="mdi mdi-forum-outline"></i>
View File

@@ -29,7 +29,7 @@
 <th scope="row"><i class="mdi mdi-harddisk"></i> {% trans "Size" %}</th>
 <td>
   {% if object.size %}
-    {{ object.size }} {% trans "GB" context "Abbreviation for gigabyte" %}
+    {{ object.size|humanize_megabytes }}
   {% else %}
     {{ ''|placeholder }}
   {% endif %}

133
netbox/utilities/jobs.py Normal file
View File

@@ -0,0 +1,133 @@
import logging
from abc import ABC, abstractmethod
from datetime import timedelta
from django.utils.functional import classproperty
from django_pglocks import advisory_lock
from rq.timeouts import JobTimeoutException
from core.choices import JobStatusChoices
from core.models import Job, ObjectType
from netbox.constants import ADVISORY_LOCK_KEYS
__all__ = (
'JobRunner',
)
class JobRunner(ABC):
"""
Background Job helper class.
This class handles the execution of a background job. It is responsible for maintaining its state, reporting errors,
and scheduling recurring jobs.
"""
class Meta:
pass
def __init__(self, job):
"""
Args:
job: The specific `Job` this `JobRunner` is executing.
"""
self.job = job
@classproperty
def name(cls):
return getattr(cls.Meta, 'name', cls.__name__)
@abstractmethod
def run(self, *args, **kwargs):
"""
Run the job.
A `JobRunner` class needs to implement this method to execute all commands of the job.
"""
pass
@classmethod
def handle(cls, job, *args, **kwargs):
"""
Handle the execution of a `Job`.
This method is called by the Job Scheduler to handle the execution of all job commands. It will maintain the
job's metadata and handle errors. For periodic jobs, a new job is automatically scheduled using its `interval`.
"""
try:
job.start()
cls(job).run(*args, **kwargs)
job.terminate()
except Exception as e:
job.terminate(status=JobStatusChoices.STATUS_ERRORED, error=repr(e))
if type(e) is JobTimeoutException:
logging.error(e)
# If the executed job is a periodic job, schedule its next execution at the specified interval.
finally:
if job.interval:
new_scheduled_time = (job.scheduled or job.started) + timedelta(minutes=job.interval)
cls.enqueue(
instance=job.object,
user=job.user,
schedule_at=new_scheduled_time,
interval=job.interval,
**kwargs,
)
@classmethod
def get_jobs(cls, instance=None):
"""
Get all jobs of this `JobRunner` related to a specific instance.
"""
jobs = Job.objects.filter(name=cls.name)
if instance:
object_type = ObjectType.objects.get_for_model(instance, for_concrete_model=False)
jobs = jobs.filter(
object_type=object_type,
object_id=instance.pk,
)
return jobs
@classmethod
def enqueue(cls, *args, **kwargs):
"""
Enqueue a new `Job`.
This method is a wrapper of `Job.enqueue()` using `handle()` as function callback. See its documentation for
parameters.
"""
return Job.enqueue(cls.handle, name=cls.name, *args, **kwargs)
@classmethod
@advisory_lock(ADVISORY_LOCK_KEYS['job-schedules'])
def enqueue_once(cls, instance=None, schedule_at=None, interval=None, *args, **kwargs):
"""
Enqueue a new `Job` once, i.e. skip duplicate jobs.
Like `enqueue()`, this method adds a new `Job` to the job queue. However, if there's already a job of this
class scheduled for `instance`, the existing job will be updated if necessary. This ensures that a particular
schedule is only set up once at any given time, i.e. multiple calls to this method are idempotent.
Note that this does not forbid running additional jobs with the `enqueue()` method, e.g. to schedule an
immediate synchronization job in addition to a periodic synchronization schedule.
For additional parameters see `enqueue()`.
Args:
instance: The NetBox object to which this job pertains (optional)
schedule_at: Schedule the job to be executed at the passed date and time
interval: Recurrence interval (in minutes)
"""
job = cls.get_jobs(instance).filter(status__in=JobStatusChoices.ENQUEUED_STATE_CHOICES).first()
if job:
# If the job parameters haven't changed, don't schedule a new job and keep the current schedule. Otherwise,
# delete the existing job and schedule a new job instead.
if (schedule_at and job.scheduled == schedule_at) and (job.interval == interval):
return job
job.delete()
return cls.enqueue(instance=instance, schedule_at=schedule_at, interval=interval, *args, **kwargs)
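A minimal sketch of how a background task adopts this helper; the class name, schedule values, and `datasource` instance are illustrative. `enqueue_once` is what makes repeated scheduling calls idempotent:

```python
from utilities.jobs import JobRunner


class SyncJob(JobRunner):
    """Hypothetical recurring task built on the new helper."""

    class Meta:
        name = 'Data source sync'  # shown as the Job's name

    def run(self, *args, **kwargs):
        # self.job is the core Job record; exceptions raised here mark it
        # errored, and a follow-up job is scheduled automatically whenever
        # an interval is set.
        ...


# Schedule hourly; calling this again with the same parameters is a no-op.
SyncJob.enqueue_once(instance=datasource, interval=60)
```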

View File

@@ -12,13 +12,25 @@ RELEASE_PATH = 'release.yaml'
 LOCAL_RELEASE_PATH = 'local/release.yaml'
+@dataclass
+class FeatureSet:
+    """
+    A map of all available NetBox features.
+    """
+    # Commercial support is provided by NetBox Labs
+    commercial: bool = False
+    # Live help center is enabled
+    help_center: bool = False
 @dataclass
 class ReleaseInfo:
     version: str
     edition: str
     published: Union[datetime.date, None] = None
     designation: Union[str, None] = None
-    features: List = field(default_factory=list)
+    features: FeatureSet = field(default_factory=FeatureSet)
     @property
     def full_version(self):

View File

@@ -0,0 +1,129 @@
from datetime import timedelta
from django.test import TestCase
from django.utils import timezone
from django_rq import get_queue
from ..jobs import *
from core.models import Job
from core.choices import JobStatusChoices
class TestJobRunner(JobRunner):
def run(self, *args, **kwargs):
pass
class JobRunnerTestCase(TestCase):
def tearDown(self):
super().tearDown()
# Clear all queues after running each test
get_queue('default').connection.flushall()
get_queue('high').connection.flushall()
get_queue('low').connection.flushall()
@staticmethod
def get_schedule_at(offset=1):
# Schedule jobs a week in advance to avoid accidentally running jobs on worker nodes used for testing.
return timezone.now() + timedelta(weeks=offset)
class JobRunnerTest(JobRunnerTestCase):
"""
Test internal logic of `JobRunner`.
"""
def test_name_default(self):
self.assertEqual(TestJobRunner.name, TestJobRunner.__name__)
def test_name_set(self):
class NamedJobRunner(TestJobRunner):
class Meta:
name = 'TestName'
self.assertEqual(NamedJobRunner.name, 'TestName')
def test_handle(self):
job = TestJobRunner.enqueue(immediate=True)
self.assertEqual(job.status, JobStatusChoices.STATUS_COMPLETED)
def test_handle_errored(self):
class ErroredJobRunner(TestJobRunner):
EXP = Exception('Test error')
def run(self, *args, **kwargs):
raise self.EXP
job = ErroredJobRunner.enqueue(immediate=True)
self.assertEqual(job.status, JobStatusChoices.STATUS_ERRORED)
self.assertEqual(job.error, repr(ErroredJobRunner.EXP))
class EnqueueTest(JobRunnerTestCase):
"""
Test enqueuing of `JobRunner`.
"""
def test_enqueue(self):
instance = Job()
for i in range(1, 3):
job = TestJobRunner.enqueue(instance, schedule_at=self.get_schedule_at())
self.assertIsInstance(job, Job)
self.assertEqual(TestJobRunner.get_jobs(instance).count(), i)
def test_enqueue_once(self):
job = TestJobRunner.enqueue_once(instance=Job(), schedule_at=self.get_schedule_at())
self.assertIsInstance(job, Job)
self.assertEqual(job.name, TestJobRunner.__name__)
def test_enqueue_once_twice_same(self):
instance = Job()
schedule_at = self.get_schedule_at()
job1 = TestJobRunner.enqueue_once(instance, schedule_at=schedule_at)
job2 = TestJobRunner.enqueue_once(instance, schedule_at=schedule_at)
self.assertEqual(job1, job2)
self.assertEqual(TestJobRunner.get_jobs(instance).count(), 1)
def test_enqueue_once_twice_different_schedule_at(self):
instance = Job()
job1 = TestJobRunner.enqueue_once(instance, schedule_at=self.get_schedule_at())
job2 = TestJobRunner.enqueue_once(instance, schedule_at=self.get_schedule_at(2))
self.assertNotEqual(job1, job2)
self.assertRaises(Job.DoesNotExist, job1.refresh_from_db)
self.assertEqual(TestJobRunner.get_jobs(instance).count(), 1)
def test_enqueue_once_twice_different_interval(self):
instance = Job()
schedule_at = self.get_schedule_at()
job1 = TestJobRunner.enqueue_once(instance, schedule_at=schedule_at)
job2 = TestJobRunner.enqueue_once(instance, schedule_at=schedule_at, interval=60)
self.assertNotEqual(job1, job2)
self.assertEqual(job1.interval, None)
self.assertEqual(job2.interval, 60)
self.assertRaises(Job.DoesNotExist, job1.refresh_from_db)
self.assertEqual(TestJobRunner.get_jobs(instance).count(), 1)
def test_enqueue_once_with_enqueue(self):
instance = Job()
job1 = TestJobRunner.enqueue_once(instance, schedule_at=self.get_schedule_at(2))
job2 = TestJobRunner.enqueue(instance, schedule_at=self.get_schedule_at())
self.assertNotEqual(job1, job2)
self.assertEqual(TestJobRunner.get_jobs(instance).count(), 2)
def test_enqueue_once_after_enqueue(self):
instance = Job()
job1 = TestJobRunner.enqueue(instance, schedule_at=self.get_schedule_at())
job2 = TestJobRunner.enqueue_once(instance, schedule_at=self.get_schedule_at(2))
self.assertNotEqual(job1, job2)
self.assertRaises(Job.DoesNotExist, job1.refresh_from_db)
self.assertEqual(TestJobRunner.get_jobs(instance).count(), 1)

View File

@@ -1,23 +0,0 @@
# Generated by Django 5.0.6 on 2024-06-06 17:46
from django.db import migrations
from django.db.models import F
def convert_disk_size(apps, schema_editor):
VirtualMachine = apps.get_model('virtualization', 'VirtualMachine')
VirtualMachine.objects.filter(disk__isnull=False).update(disk=F('disk') * 1000)
class Migration(migrations.Migration):
dependencies = [
('virtualization', '0038_virtualdisk'),
]
operations = [
migrations.RunPython(
code=convert_disk_size,
reverse_code=migrations.RunPython.noop
),
]

View File

@@ -1,12 +1,10 @@
-# Generated by Django 5.0.6 on 2024-06-04 17:09
 from django.db import migrations, models
 class Migration(migrations.Migration):
     dependencies = [
-        ('virtualization', '0039_convert_disk_size'),
+        ('virtualization', '0038_virtualdisk'),
     ]
     operations = [
View File

@@ -0,0 +1,31 @@
from django.db import migrations
from django.db.models import F, Sum
def convert_disk_size(apps, schema_editor):
VirtualMachine = apps.get_model('virtualization', 'VirtualMachine')
VirtualMachine.objects.filter(disk__isnull=False).update(disk=F('disk') * 1000)
VirtualDisk = apps.get_model('virtualization', 'VirtualDisk')
VirtualDisk.objects.filter(size__isnull=False).update(size=F('size') * 1000)
# Recalculate disk size on all VMs with virtual disks
id_list = VirtualDisk.objects.values_list('virtual_machine_id').distinct()
virtual_machines = VirtualMachine.objects.filter(id__in=id_list)
for vm in virtual_machines:
vm.disk = vm.virtualdisks.aggregate(Sum('size', default=0))['size__sum']
VirtualMachine.objects.bulk_update(virtual_machines, fields=['disk'])
class Migration(migrations.Migration):
dependencies = [
('virtualization', '0039_virtualmachine_serial_number'),
]
operations = [
migrations.RunPython(
code=convert_disk_size,
reverse_code=migrations.RunPython.noop
),
]
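The arithmetic in this migration converts decimal gigabytes to megabytes (multiplying by 1000, not 1024) and then rebuilds each VM's aggregate disk figure from its virtual disks. A small worked example under those assumptions:

```python
# A VM defined with disk=2 (formerly GB) becomes disk=2000 (MB).
# A VM with two virtual disks of 500 and 250 (GB) has them converted to
# 500000 and 250000 (MB); its disk field is then recomputed as the sum:
vm.disk = vm.virtualdisks.aggregate(Sum('size', default=0))['size__sum']  # 750000
```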

View File

@@ -431,7 +431,7 @@ class VMInterface(ComponentModel, BaseInterface, TrackingModelMixin):
 class VirtualDisk(ComponentModel, TrackingModelMixin):
     size = models.PositiveIntegerField(
-        verbose_name=_('size (GB)'),
+        verbose_name=_('size (MB)'),
     )
     class Meta(ComponentModel.Meta):

View File

@@ -194,6 +194,9 @@ class VirtualDiskTable(NetBoxTable):
         verbose_name=_('Name'),
         linkify=True
     )
+    size = tables.Column(
+        verbose_name=_('Size')
+    )
     tags = columns.TagColumn(
         url_name='virtualization:virtualdisk_list'
     )
@@ -208,6 +211,9 @@ class VirtualDiskTable(NetBoxTable):
         'data-name': lambda record: record.name,
     }
+    def render_size(self, value):
+        return humanize_megabytes(value)
 class VirtualMachineVirtualDiskTable(VirtualDiskTable):
     actions = columns.ActionsColumn(