279 Commits

Author SHA1 Message Date
Twan Kamans
da17b7be7d Merge pull request #173 from TheNetworkGuy/develop
Package code to main
2026-02-27 16:44:01 +01:00
Twan Kamans
4b54d93c6f Merge pull request #172 from TheNetworkGuy/bug/config-file-path
🐛 Adjusted config.py reading behavior to support legacy usage
2026-02-27 15:47:16 +01:00
Wouter de Bruijn
8073cae46a 🔥 Removed special case option because of unlikely scenario 2026-02-27 15:36:39 +01:00
Wouter de Bruijn
9da113ac60 🚧 Added check for reading config file from netbox-zabbix-sync.py as root dir 2026-02-27 13:43:56 +01:00
Wouter de Bruijn
473dd1dcc1 🐛 Updated script_dir path for new netbox_zabbix_sync parent folder 2026-02-27 13:43:18 +01:00
Wouter de Bruijn
14e68c34ea 🐛 Changed end of line to LF 2026-02-27 13:42:31 +01:00
Twan Kamans
4d8cd6a81d Merge pull request #166 from TheNetworkGuy/feature/PIPreperations
♻️ Moved project code for Python bundling as preparation for the PIP package
2026-02-25 14:34:02 +01:00
Wouter de Bruijn
0874bc9275 🔥 Removed load_config method 2026-02-25 14:21:32 +01:00
Wouter de Bruijn
4d0c2a42e2 🔥 Removed _config method 2026-02-25 14:20:25 +01:00
TheNetworkGuy
d4f1a2a572 Added #112 2026-02-25 12:55:50 +00:00
TheNetworkGuy
f6b23b4bcd Adds #151 2026-02-25 09:13:18 +00:00
TheNetworkGuy
e7de68c7c3 Fixes #152 2026-02-25 09:08:08 +00:00
TheNetworkGuy
0a37ff491c Changed order of printing the NetBox version: it makes sense to print it only once a successful authenticated session is present. This fixes a bug where the version was printed even though the token might be invalid, which could cause confusion. 2026-02-25 09:05:21 +01:00
TheNetworkGuy
9ec8bb3c2c Fixed some Ruff linting and modified the error message for start function without a proper netbox / zabbix connection. 2026-02-23 13:49:11 +00:00
TheNetworkGuy
7b83d768d0 Modified error message for sync function that does not have a valid netbox or zabbix connection. 2026-02-23 13:46:35 +00:00
TheNetworkGuy
ed63c3e33b Added missing function to testing suite which was deleted in a previous commit. 2026-02-23 13:17:32 +00:00
TheNetworkGuy
b7b399444c Replaced raise with return False statement and added return True at the end of the function. 2026-02-23 14:04:56 +01:00
TheNetworkGuy
0b92586057 Removed unused code for testing UserMacroSync logic that contained a raise exception. 2026-02-23 14:00:13 +01:00
TheNetworkGuy
dc0a1f9122 Modified logging messages which contained device / VM and renamed these to "Host" for consistent logging.
Moved logging function from global to main() for CLI
2026-02-23 13:47:21 +01:00
TheNetworkGuy
e3487378c1 Fixed check for future Zabbix versions. 2026-02-23 13:28:43 +01:00
TheNetworkGuy
449704156c Added debug line for showing Zabbix version and modified check for Zabbix versions above 6.x.x for proxy configuration 2026-02-23 13:23:27 +01:00
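Several commits here gate proxy configuration on the Zabbix version (and a later one checks the NetBox version). The actual project code is not shown in this log, but a minimal sketch of the usual idiom follows: parsing the dotted version string into an integer tuple so that comparisons work for multi-digit components. The function name and threshold are illustrative assumptions, not the repository's real API.

```python
def version_at_least(version: str, minimum: tuple[int, ...]) -> bool:
    """Compare a dotted version string like "7.4.1" against a minimum.

    Tuple comparison handles multi-digit components correctly, unlike a
    naive string comparison where "10.0" < "6.0" lexicographically.
    Illustrative helper, not the project's actual implementation.
    """
    parts = tuple(int(p) for p in version.split(".") if p.isdigit())
    return parts >= minimum

# e.g. only send proxy-group fields to Zabbix servers new enough to support them
assert version_at_least("7.4.1", (7, 0))
assert not version_at_least("6.0.12", (7, 0))
assert version_at_least("10.0", (6, 0))  # string comparison would get this wrong
```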
TheNetworkGuy
489a70b703 Modified the API call used for testing the API connection from sites.all to devices.count. 2026-02-23 13:15:15 +01:00
TheNetworkGuy
4185aaba24 Made the token checks more consistent with function variables and updated the tests to better reflect the use of a proper V2 token. 2026-02-23 11:59:13 +00:00
TheNetworkGuy
a29f51f314 Updated and simplified Netbox token testing 2026-02-23 11:39:25 +00:00
TheNetworkGuy
08519c7433 Added Netbox token check to support #159 2026-02-23 11:33:15 +00:00
TheNetworkGuy
7cfed2ec76 Added warning message for Token v2 with Netbox 4.5 or higher. This is according to Netbox recommendations and to warn our users. 2026-02-23 10:35:41 +00:00
TheNetworkGuy
d5e3199e92 Implemented central configuration and a configurable config path. Updated tests to use self.config instead of re-initializing config. 2026-02-19 16:17:58 +00:00
TheNetworkGuy
f7d0989320 Added check for when a non-primary cluster member is synced 2026-02-19 13:04:32 +00:00
TheNetworkGuy
3be3cdc8ef Fixed Ruff linting 2026-02-19 12:40:50 +00:00
TheNetworkGuy
a4d5fda5e3 Added VM tests and tag tests 2026-02-19 12:38:29 +00:00
TheNetworkGuy
02a5617bc8 Fixed some hostgroup tests and added 4 new tests 2026-02-19 12:08:01 +00:00
TheNetworkGuy
434f0c9e68 Added new core tests dedicated towards status conflicts / changes and Template sourcing 2026-02-19 11:47:50 +00:00
TheNetworkGuy
c00ec4de31 Fixed several ruff and ty checks. 2026-02-18 14:10:59 +00:00
TheNetworkGuy
dfba6f4714 Renamed NB API import, removed unused sys import, added error when ZBX token and password are both used, revamped the core testing file and added useful tests such as device clustering and a base for future device testing. 2026-02-18 13:57:37 +00:00
TheNetworkGuy
223a27f47c Changed sync function to class 2026-02-17 15:45:43 +00:00
TheNetworkGuy
d55fc0a4e7 Fixed ruff formatting 2026-02-16 13:29:48 +00:00
TheNetworkGuy
39f3c57cca Renamed module/config.py file to settings.py to avoid confusion with the main config.py file 2026-02-16 13:28:18 +00:00
Wouter de Bruijn
79396242fe 👷 Fixed CI publish 2026-02-13 15:42:45 +01:00
TheNetworkGuy
2028b7b8aa Reformatted file for ruff check 2026-02-12 22:27:22 +00:00
TheNetworkGuy
ebbebfa17f Adds tests for new core module 2026-02-12 22:24:06 +00:00
TheNetworkGuy
de02d257f7 Fixed file for linting issues. 2026-02-12 16:27:18 +00:00
TheNetworkGuy
b3f02dc028 Renamed run_sync function to sync and imported it at package level for easier imports. 2026-02-12 17:22:41 +01:00
Wouter de Bruijn
2b251b8f68 👷 Changed build to only run once for Python 3.12 2026-02-12 16:30:54 +01:00
Wouter de Bruijn
37257074bc 🔧 Updated lockfile 2026-02-12 16:25:20 +01:00
Wouter de Bruijn
0aa019e104 🔧 Added pypi publishing step 2026-02-12 16:25:11 +01:00
Wouter de Bruijn
6d0b031016 🔧 Adjusted build to use dynamic git tag version 2026-02-12 15:42:13 +01:00
Wouter de Bruijn
ce7ad878a2 🔧 Added cli command on package install 2026-02-12 15:41:59 +01:00
Wouter de Bruijn
e2b5c853a4 🙈 Ignored _version.py file 2026-02-12 15:41:19 +01:00
Wouter de Bruijn
3209e7077c 🔥 Removed saving of None return value 2026-02-12 15:25:35 +01:00
Wouter de Bruijn
14c0b9a479 Updated patch targets for new module structure 2026-02-12 15:22:39 +01:00
Wouter de Bruijn
22ebeaec1b 🐛 Fixed exclusion of all config.py files instead of only root file 2026-02-12 15:20:47 +01:00
Wouter de Bruijn
b2d021e849 👷 Added python packaging build step in GitHub actions 2026-02-12 15:17:50 +01:00
Wouter de Bruijn
f302cef05c ♻️ Importing cli parser from netbox_zabbix_sync module 2026-02-12 15:17:50 +01:00
Wouter de Bruijn
414f272d75 🙈 Ignored build files 2026-02-12 15:17:35 +01:00
Wouter de Bruijn
a8146b1e05 ♻️ Moved sourcecode into netbox_zabbix_sync module 2026-02-12 15:17:34 +01:00
TheNetworkGuy
6697311f8d Split core code from calling the script directly 2026-02-12 15:17:34 +01:00
Wouter de Bruijn
811e1eaa69 🔀 Merge pull request #165 from TheNetworkGuy/remove-pylint-annotations
💡 Removed old pylint annotations
2026-02-12 10:31:07 +01:00
Twan Kamans
e15919cfdd Merge pull request #164 from TheNetworkGuy/test-linting-exceptions
🔧 Specifically ignore assertion in tests instead of entire codebase
2026-02-12 10:26:10 +01:00
Wouter de Bruijn
6d715e6835 💡 Removed old pylint annotations 2026-02-12 10:25:35 +01:00
Twan Kamans
ab761f6b07 Merge pull request #163 from TheNetworkGuy/devcontainer-uv-environment
🔧 Updated post create command to fully use uv environment
2026-02-12 10:24:51 +01:00
Wouter de Bruijn
a151771002 🔒️ Switched to installation of locked dependencies 2026-02-12 10:02:45 +01:00
Wouter de Bruijn
df00114e3a 🔧 Removed pip installation in favor of installing uv and synchronizing the virtual environment 2026-02-12 09:57:15 +01:00
Wouter de Bruijn
623994c55f 🔧 Specifically ignore assertion in tests instead of entire codebase 2026-02-12 09:24:32 +01:00
Twan Kamans
5c04757f4b Merge pull request #162 from TheNetworkGuy/develop
Fixes code to be compatible with ruff
2026-02-11 17:16:43 +01:00
TheNetworkGuy
e5d4bb64f0 Fixed linting on several files 2026-02-11 15:51:35 +00:00
TheNetworkGuy
3227bb3165 Fixed formatting, fixed tests for type checker 2026-02-11 15:30:53 +00:00
TheNetworkGuy
d53cc5e7d4 Added link to wiki in readme 2026-02-11 14:39:29 +00:00
TheNetworkGuy
8c5cdc77d7 Removed "slowly" from the readme banner. We are not going to move this documentation slowly, right? Also added a link directly to the wiki. 2026-02-11 14:39:18 +00:00
TheNetworkGuy
2ea211b5dd Adds a little banner on the readme pointing towards the Wiki documentation 2026-02-11 14:36:35 +00:00
TheNetworkGuy
9212f486bf Updated main script with updated function names and to be valid code for ruff 2026-02-11 14:31:20 +00:00
TheNetworkGuy
18d67d5c2b Updated several modules to be valid code for ruff 2026-02-11 14:30:46 +00:00
TheNetworkGuy
2e2939ce55 Updated test_tools for ruff 2026-02-11 14:30:18 +00:00
TheNetworkGuy
5255984f80 Updated tests due to ruff checks failing 2026-02-11 14:30:03 +00:00
TheNetworkGuy
d32540d0e1 Updated devcontainer and added assertion exception for pytest code. 2026-02-11 14:29:27 +00:00
Twan Kamans
a80dc9fc2b Merge pull request #161 from TheNetworkGuy/main
Fix develop
2026-02-11 11:58:25 +01:00
Twan Kamans
f7dd8523a6 Merge pull request #160 from TheNetworkGuy/uv-project
🔧 Switched to astral.sh stack for project management, linting and formatting.
2026-02-11 11:10:29 +01:00
Twan Kamans
313158ea73 Merge pull request #154 from TheNetworkGuy/enforce-tag-list-order
🐛 Enforce tag list order before comparison
2026-02-11 10:50:44 +01:00
Wouter de Bruijn
64c10726c7 Added pytest-cov dev dependency 2026-02-02 18:51:38 +01:00
Wouter de Bruijn
cf4c4c5620 👷 Switched to astral.sh stack for linting and formatting 2026-02-02 18:48:06 +01:00
Wouter de Bruijn
6b29a70aea 👷 Updated action to use uv 2026-02-02 18:48:00 +01:00
Wouter de Bruijn
49c6b4644c 🔧 Switched to uv for project management 2026-02-02 18:47:50 +01:00
Wouter de Bruijn
fd66a4c943 🚨 Fixed Unnecessary ellipsis constant 2025-12-19 09:50:01 +01:00
Wouter de Bruijn
fdaeb79d4d 🐛 Removed deprecation decorator because of unavailability in Python 3.12 2025-12-19 09:47:47 +01:00
Wouter de Bruijn
765b4713a6 🐛 Changed tag sorting to tag name and value 2025-12-19 09:24:14 +01:00
Wouter de Bruijn
c275e08953 🐛 Enforce tag list order before comparison 2025-12-02 10:21:28 +01:00
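The two tag-sorting fixes above address spurious diffs when comparing Zabbix host tags, which are lists of `{"tag": ..., "value": ...}` dicts that the API may return in any order. A minimal sketch of the idea, sorting both lists by (tag, value) before comparing; the helper name and dict shape are illustrative assumptions:

```python
def tags_equal(current: list[dict], desired: list[dict]) -> bool:
    """Order-insensitive comparison of Zabbix-style tag lists.

    Each tag is a dict like {"tag": "environment", "value": "prod"}.
    Sorting both sides by (tag, value) before comparing avoids false
    'changed' detections when only the ordering differs.
    Illustrative helper, not the project's actual implementation.
    """
    key = lambda t: (t.get("tag", ""), t.get("value", ""))
    return sorted(current, key=key) == sorted(desired, key=key)

a = [{"tag": "site", "value": "ams"}, {"tag": "env", "value": "prod"}]
b = [{"tag": "env", "value": "prod"}, {"tag": "site", "value": "ams"}]
assert tags_equal(a, b)      # same tags, different order
assert a != b                # a naive list comparison would report a change
```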
Raymond Kuiper
9cc229c2f7 Merge pull request #148 from TheNetworkGuy/develop
Merge latest development code base to main
2025-10-16 11:45:04 +02:00
Raymond Kuiper
40592a589d Merge pull request #145 from retigra/proxy-by-cf
Allow configuration of proxies based on custom fields and added support for more types of custom fields.
2025-10-16 11:15:56 +02:00
Wouter de Bruijn
8197f41788 🎨 Minor formatting cleanup 2025-10-15 17:27:07 +02:00
Wouter de Bruijn
efb42916fd ✏️ Minor typo cleanup 2025-10-15 17:26:54 +02:00
Raymond Kuiper
d75b0c2728 Merge pull request #143 from retigra/inherent-site-properties
Inherit site properties
2025-09-28 19:18:08 +02:00
Twan Kamans
2fa05ffe92 Merge pull request #146 from TheNetworkGuy/develop
Support for 7.4
2025-09-24 15:10:50 +02:00
TheNetworkGuy
b81d4abfcd Add support for Zabbix 7.4 2025-09-23 12:47:05 +02:00
Wouter de Bruijn
047fb33332 🚑 Fixed random space on line 2 2025-09-12 16:47:57 +02:00
Wouter de Bruijn
bf512ada0b 💄 Codebase formatting 2025-09-12 16:45:03 +02:00
Wouter de Bruijn
337184159b 🐛 Fixed key/value check for proxy assignment 2025-09-12 16:44:04 +02:00
Raymond Kuiper
b9cf7b5bbe Merge pull request #5 from retigra/develop
Develop
2025-09-12 15:40:33 +02:00
Raymond Kuiper
58365f5228 Merge pull request #4 from retigra/proxy-by-cf
Merge latest features
2025-09-12 14:42:28 +02:00
Raymond Kuiper
37774cfec3 More linting fixes 2025-09-12 14:40:53 +02:00
Raymond Kuiper
c27505b927 corrected linting errors and a minor bug in cf_to_string 2025-09-12 14:39:11 +02:00
Raymond Kuiper
bc12064b6a corrected linting error 2025-09-12 14:27:06 +02:00
Raymond Kuiper
422d343c1f * Added support for object and select custom fields in host groups and proxy config.
* Corrected error when `full_proxy_sync` was not set and a host no longer uses a proxy.
2025-09-12 14:11:38 +02:00
Wouter de Bruijn
123b243f56 ♻️ Improved Zabbix version check for proxy group insertion 2025-09-12 10:48:29 +02:00
Raymond Kuiper
7d9bb9f637 Refactoring 2025-09-12 10:21:42 +02:00
Raymond Kuiper
17ba97be45 Minor update on README 2025-09-11 17:26:05 +02:00
Raymond Kuiper
5810cbe621 First working version of proxy by custom fields 2025-09-11 17:20:05 +02:00
Raymond Kuiper
b5d7596de7 Reverted device inventory map to work with default configuration 2025-09-09 10:00:53 +02:00
Raymond Kuiper
18f52c1d40 Added documentation for extended site properties 2025-09-09 09:36:58 +02:00
Raymond Kuiper
79e82c4365 Added option to extend site information for devices and vms. 2025-09-08 14:47:48 +02:00
Raymond Kuiper
9259e73617 Added option to extend site information for devices and vms. 2025-09-08 14:44:46 +02:00
Raymond Kuiper
c58a3e8dd5 Update README.md
Replaced dependency pyzabbix with zabbix-utils as this was changed a few months ago.
2025-06-26 09:48:25 +02:00
Raymond Kuiper
3e1657e575 Merge pull request #140 from retigra/hostgroup_static_text
 Hostgroup static text
2025-06-25 17:21:31 +02:00
Raymond Kuiper
161b310ba3 corrected linting error 2025-06-25 17:07:46 +02:00
Raymond Kuiper
cf2c841d23 Merge branch 'develop' into hostgroup_static_text 2025-06-25 17:06:37 +02:00
Raymond Kuiper
b258b02b91 Merge pull request #138 from retigra/issue-136
 Logging improvements
2025-06-25 17:00:58 +02:00
Raymond Kuiper
e82c098e26 corrected linting error 2025-06-25 17:00:04 +02:00
Raymond Kuiper
3910e0de2d Updated docs 2025-06-25 16:54:12 +02:00
Raymond Kuiper
98c13919c5 Added support for hardcoded strings in hostgroups 2025-06-25 16:50:17 +02:00
Wouter de Bruijn
e718560689 🚨 Line length fixes 2025-06-25 16:37:44 +02:00
Wouter de Bruijn
57c7f83e6a 🔊 Removed f-strings usage from logs 2025-06-25 13:56:41 +02:00
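The commit above swaps f-string interpolation in log calls for %-style lazy formatting, the idiom linters flag as `logging-fstring-interpolation`. A short before/after sketch (the logger name and message are illustrative):

```python
import logging

logger = logging.getLogger("netbox-zabbix-sync")
host = "sw01.example.com"  # illustrative value

# f-string: the message is formatted eagerly, even when the record
# is filtered out by the log level
logger.debug(f"Host {host} is up to date.")

# %-style: the template is only rendered if the record is actually
# emitted, and aggregators can group records sharing one template
logger.debug("Host %s is up to date.", host)
```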
Raymond Kuiper
e0ec3c0632 updated usermacro test for new loglevels 2025-06-25 10:54:39 +02:00
Raymond Kuiper
e4a1a17ded Logging improvements 2025-06-25 10:43:47 +02:00
Twan Kamans
f15e53185b Merge pull request #137 from TheNetworkGuy/hostgroup_fixes2
Fixes bug for hostgroups and removed default values for hostgroups
2025-06-24 21:44:34 +02:00
TheNetworkGuy
5923682d48 Fixed workflows being executed 2 times. 2025-06-24 21:42:46 +02:00
TheNetworkGuy
29a54e5a86 Removed unused hostgroup import since the hostgroup generate function has been moved to devices.py 2025-06-24 21:29:36 +02:00
TheNetworkGuy
4a53b53789 Removed previous patch for Nonetype hostgroups and made a proper fix by refactoring the set_hostgroup() function and removing it from virtual_machines.py 2025-06-24 21:28:32 +02:00
TheNetworkGuy
6d4f1ac0a5 Added hostgroup tests 2025-06-24 21:28:13 +02:00
TheNetworkGuy
a522c98929 Removed default None for hg_format making a hostgroup format input required. 2025-06-24 20:50:04 +02:00
TheNetworkGuy
1de0b0781b Removed default for hostgroups and fixed bug for hostgroup attributes which do not exist 2025-06-24 20:44:59 +02:00
Raymond Kuiper
1cf24fbcb5 Merge pull request #135 from retigra/issue-131
🐛 Fixes for issue #131
2025-06-24 17:52:13 +02:00
Raymond Kuiper
c2b25e0cd2 fixed linting 2025-06-24 17:35:10 +02:00
Raymond Kuiper
9933c97e94 improved debug logging 2025-06-24 17:28:57 +02:00
Raymond Kuiper
435fd1fa78 Fixed issues with tag mapping 2025-06-24 17:09:23 +02:00
Raymond Kuiper
099ebcace5 Merge pull request #134 from retigra/issue-130
🐛 Fixes for issue #130
2025-06-24 16:02:36 +02:00
Raymond Kuiper
906c719863 corrected linting errors 2025-06-24 15:16:39 +02:00
Raymond Kuiper
2a3d586302 corrected typo 2025-06-24 15:06:52 +02:00
Raymond Kuiper
753633e7d2 Added checks for empty list of hostgroups, improved some logging 2025-06-24 15:01:45 +02:00
Raymond Kuiper
de82d5ac71 Remove duplicates from the list of hostgroups 2025-06-24 13:52:43 +02:00
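The commit above strips duplicates from the generated hostgroup list. A one-line sketch of the usual order-preserving approach; the function name is an illustrative assumption:

```python
def dedupe_hostgroups(hostgroups: list[str]) -> list[str]:
    """Remove duplicate hostgroup names while preserving first-seen order.

    dict.fromkeys keeps insertion order (Python 3.7+), unlike set(),
    which would reorder the groups between runs.
    Illustrative helper, not the project's actual implementation.
    """
    return list(dict.fromkeys(hostgroups))

assert dedupe_hostgroups(["Sites/AMS", "Routers", "Sites/AMS"]) == ["Sites/AMS", "Routers"]
```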
Raymond Kuiper
9912f24450 Merge pull request #3 from TheNetworkGuy/main
Sync with upstream
2025-06-24 11:58:31 +02:00
Twan Kamans
d056a20de2 Merge pull request #128 from TheNetworkGuy/develop
Fixes #127, implements some tests to prevent hostgroup failures.
2025-06-17 09:06:30 +02:00
TheNetworkGuy
a57b51870f Merge branch 'develop' of github.com:TheNetworkGuy/netbox-zabbix-sync into develop 2025-06-17 08:47:49 +02:00
TheNetworkGuy
dbc7acaf98 Added hostgroup tests, set the test coverage to 70%, added test packages to devcontainer 2025-06-16 18:40:06 +00:00
TheNetworkGuy
87b33706c0 Updated README with cluster_type 2025-06-16 16:07:38 +00:00
TheNetworkGuy
affd4c6998 Fixes #127 2025-06-16 16:03:53 +00:00
Twan Kamans
22982c3607 Merge pull request #126 from TheNetworkGuy/develop
Fixes bug in which config.py was not detected by the script
2025-06-16 17:21:03 +02:00
TheNetworkGuy
dec2cf6996 Fixed bug in which custom config.py module was not accessed 2025-06-16 14:04:10 +00:00
TheNetworkGuy
940f2d6afb Re-added some git logic to the pipeline which was lost during development 2025-06-16 11:13:36 +00:00
TheNetworkGuy
d79f96a5b4 Add unittests to build process 2025-06-16 10:03:58 +00:00
Twan Kamans
2f40ec467b Merge pull request #125 from TheNetworkGuy/develop
Fixes image push pipeline
2025-06-16 11:28:26 +02:00
TheNetworkGuy
e0d28633c3 Fixes image push pipeline 2025-06-16 11:27:38 +02:00
Twan Kamans
0a20e270ed Merge pull request #123 from TheNetworkGuy/develop
Adds unit tests, modular config with default config fallback, ARM docker image support, mapping of usermacros, mapping of tags, inventory sync for VMs, partial support for multiple hostgroups and fixed several bugs.
2025-06-16 11:22:06 +02:00
TheNetworkGuy
a5be9538d9 Made the pytest file a bit cleaner and removed a redundant step 2025-06-16 11:15:52 +02:00
Raymond Kuiper
b31e41ca6b Merge pull request #124 from retigra/additional-hostgroup-support
 Additional hostgroup support
2025-06-16 10:54:17 +02:00
Raymond Kuiper
ba530ecd58 corrected linting errors 2025-06-16 10:28:17 +02:00
Raymond Kuiper
a3259c4fe3 Merge branch 'develop' into additional-hostgroup-support 2025-06-16 10:06:47 +02:00
TheNetworkGuy
5e390396ba Fixed small typo 2025-06-14 23:16:07 +02:00
TheNetworkGuy
ee6d13bfdf Fixed line too long and updated readme 2025-06-14 20:17:57 +00:00
TheNetworkGuy
8fe7e5763b Added sanitizer function for log output. 2025-06-14 20:15:05 +00:00
Raymond Kuiper
a7a79ea81e updated README for multiple hostgroups 2025-06-13 15:56:21 +02:00
Raymond Kuiper
b62e8203b6 removed debug line 2025-06-13 15:47:31 +02:00
Raymond Kuiper
bfadd88542 perform hostgroup creation before consistency check 2025-06-13 10:49:40 +02:00
Raymond Kuiper
bd4d21c5d8 Hostgroup CF checks for VMs 2025-06-13 10:24:26 +02:00
TheNetworkGuy
148ce47c10 Set minimum coverage to 60 2025-06-12 20:25:54 +00:00
TheNetworkGuy
7969de50bf Adds coverage report to gitignore. Adds extra coverage test to workflow. 2025-06-12 20:24:29 +00:00
TheNetworkGuy
7394bf8d1d Fixed a bunch of typos (how did this happen!?!) 2025-06-12 19:24:04 +00:00
TheNetworkGuy
8ce2cab86f Fixed bug where sync.log was created in the modules directory 2025-06-12 18:35:56 +00:00
TheNetworkGuy
76723d2823 Updated Git workflow: linter to Python 3.13; image publisher will only execute when a commit is performed on main. 2025-06-12 13:51:59 +02:00
TheNetworkGuy
c58e5aba1e Fixed minor documentation mistake 2025-06-12 11:51:15 +00:00
TheNetworkGuy
baf23403a0 Updated documentation after fixing #111 2025-06-12 11:20:46 +00:00
TheNetworkGuy
3115eaa04e Fixed linter and test for config file 2025-06-12 11:14:15 +00:00
TheNetworkGuy
c8fda04ce8 Fixed config bug and #111 2025-06-12 11:08:21 +00:00
TheNetworkGuy
7b8827fa94 Added Zabbix logout 2025-06-12 10:56:30 +02:00
TheNetworkGuy
b705e1341f Fixed publish image workflow 2025-06-11 20:15:02 +00:00
TheNetworkGuy
8df17f208c Fixed small typo in Readme, Updated zabbix-utils, Added Devcontainer, Fixed logging and class description in usermacros module, fixed Zabbix consistencycheck for Usermacros and added unit tests for usermacros. 2025-06-11 20:09:53 +00:00
Twan Kamans
22d735dd82 Merge pull request #121 from TheNetworkGuy/unittesting
Modular config, Github unittesting
2025-06-08 22:14:38 +02:00
TheNetworkGuy
a325863aec Fixed several config errors, double exception imports, Linter stuff and edited test for new device_inventory_map key 2025-06-08 22:13:36 +02:00
TheNetworkGuy
9e1a90833d Added new config parameters to base template 2025-06-08 21:45:45 +02:00
Twan Kamans
45e633b5ed Merge branch 'develop' into unittesting 2025-06-08 21:33:21 +02:00
Raymond Kuiper
298e6c4370 support multiple hostgroups for vm 2025-06-05 11:53:17 +02:00
Raymond Kuiper
77b0798b65 Added verification of vm_hostgroup_format (moved function to tools.py) 2025-06-05 11:39:42 +02:00
Raymond Kuiper
27ee4c341f Fixed multiple hostgroups for devices 2025-06-04 15:18:27 +02:00
Raymond Kuiper
f7eb47a8a8 removed scratch file 2025-06-04 14:23:46 +02:00
Raymond Kuiper
bc53737e02 first support of multiple hostgroups 2025-06-04 14:23:01 +02:00
TheNetworkGuy
539ad64c8d Adds 2 new tests for virtual chassis 2025-04-28 22:49:04 +02:00
TheNetworkGuy
bbe28d9705 Added all default config statements and added a warning for any curious users. 2025-04-28 22:31:36 +02:00
TheNetworkGuy
2998dfde54 Specified Python version in pipeline test step 2025-04-28 22:21:30 +02:00
TheNetworkGuy
d60eb1cb2d Removed python test files for linter. 2025-04-28 22:18:59 +02:00
TheNetworkGuy
98edf0ad99 Adjusted ENV prefix, fixed several linter errors in new tests 2025-04-28 17:23:51 +02:00
TheNetworkGuy
772fef0930 Added prefix for environment variables 2025-04-28 15:57:11 +02:00
TheNetworkGuy
68cf28565d Fixed some pipeline stuff 2025-04-28 15:47:37 +02:00
TheNetworkGuy
0c715d4f96 Fixed some basic Flake8 errors, added Pylinter exception, Fixed some minor logging bugs. 2025-04-28 15:44:45 +02:00
TheNetworkGuy
819126ce36 Added tests for config file, added logger for config file 2025-04-28 15:35:51 +02:00
TheNetworkGuy
04a610cf84 Fixed some minor Flake8 errors 2025-04-28 15:10:48 +02:00
TheNetworkGuy
e91eecffaa Fixed one statement in new test code. 2025-04-28 14:58:38 +02:00
TheNetworkGuy
eb307337f6 Removed YAML config logic, added python config logic with default fallback. Added ENV variable support for config parameters. 2025-04-28 14:50:52 +02:00
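The commit above replaces YAML config with a Python config module that falls back to defaults and can be overridden via environment variables (a later commit adds an ENV prefix). A minimal sketch of that resolution order; the `NBZX_` prefix, keys, and defaults are illustrative assumptions, not the project's real names:

```python
import os

# Illustrative defaults; the project's real keys and values differ.
DEFAULT_CONFIG = {"netbox_host": "http://localhost:8000", "sync_vms": "false"}

def load_setting(name: str, prefix: str = "NBZX_") -> str:
    """Resolve a setting: environment variable first, then the default.

    A dedicated ENV prefix avoids collisions with unrelated variables.
    Illustrative sketch, not the project's actual implementation.
    """
    return os.environ.get(prefix + name.upper(), DEFAULT_CONFIG[name])

os.environ["NBZX_SYNC_VMS"] = "true"
assert load_setting("sync_vms") == "true"               # ENV override wins
assert load_setting("netbox_host") == "http://localhost:8000"  # default fallback
```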
TheNetworkGuy
5fd89a1f8a Added .vscode as exception to gitignore 2025-04-28 13:32:28 +02:00
TheNetworkGuy
cb0500d0c0 Fixed test layout and added pipeline step to actually run tests 2025-04-28 10:47:52 +02:00
TheNetworkGuy
7383583c43 Adjusted Gitignore, added config module, adjusted requirements for YAML support, added first unittests 2025-04-25 14:43:35 +02:00
TheNetworkGuy
dad7d2911f Reverted previous work 2025-04-23 11:11:05 +02:00
TheNetworkGuy
4fd582970d Container statement removed, added logs output 2025-04-14 20:43:32 +02:00
TheNetworkGuy
ad2ace942a Increased start_period time of Netbox 2025-04-14 20:37:17 +02:00
TheNetworkGuy
989f6fa96e Moved compose override logic to infra folder 2025-04-14 20:36:52 +02:00
TheNetworkGuy
f303e7e01d Moved to compose v2 2025-04-14 20:27:44 +02:00
TheNetworkGuy
38d61dcde7 Removed sudo statement 2025-04-14 20:25:02 +02:00
TheNetworkGuy
feb719542d Added Netbox deployment config 2025-04-14 20:22:43 +02:00
TheNetworkGuy
ea5b7d3196 Added initial unittesting PoC to see if Docker and Python are working correctly 2025-04-14 20:13:15 +02:00
Twan Kamans
28193cc120 Merge pull request #106 from retigra/develop
🔊 Logging improvements
2025-04-14 19:04:00 +02:00
TheNetworkGuy
908e7eeda9 Added documentation line for unsupported Zabbix versions. 2025-04-14 16:35:09 +02:00
Raymond Kuiper
e9a86334d9 Merge pull request #2 from retigra/main
Updates to the dockerfile
2025-04-10 16:19:46 +02:00
Raymond Kuiper
2ea2edb6a6 Update Dockerfile 2025-04-10 16:13:37 +02:00
Raymond Kuiper
37b3bfc7fb Update Dockerfile 2025-04-10 16:05:34 +02:00
Raymond Kuiper
6abdac2eb4 Update Dockerfile 2025-04-10 16:01:53 +02:00
Raymond Kuiper
13fe406b63 Update Dockerfile 2025-04-10 16:00:56 +02:00
Raymond Kuiper
20a3c67fd4 Update Dockerfile 2025-04-10 15:37:57 +02:00
Raymond Kuiper
b56a4332b9 Update Dockerfile 2025-04-10 15:35:44 +02:00
Raymond Kuiper
73d34851fb Update Dockerfile 2025-04-10 15:34:50 +02:00
Raymond Kuiper
10313ef5cf Merge pull request #1 from retigra/develop
Develop
2025-04-09 16:09:01 +02:00
Raymond Kuiper
93c88333a6 Merge branch 'main' into develop 2025-04-09 16:08:52 +02:00
Raymond Kuiper
50b7ede81b 🔧 quick dockerfile fix 2025-04-09 16:03:45 +02:00
Raymond Kuiper
3e52edef2d Merge branch 'main' into develop 2025-04-09 15:58:37 +02:00
Raymond Kuiper
4449e040ce 🐛 added check for empty usermacro value. 2025-04-09 15:49:38 +02:00
Raymond Kuiper
aa6be1312e Merge pull request #109 from mathieumd/patch-1
Update README.md
2025-03-28 09:54:19 +01:00
Mathieu MD
50c13c20cb Update README.md
Use Bash syntax
2025-03-28 09:11:14 +01:00
Mathieu MD
964045f53e Update README.md
- Fix #108 
- Enhance a few manual installation details
2025-03-28 09:09:28 +01:00
Wouter de Bruijn
6bdaf4e5b7 🐛 Permission fixes 2025-02-28 15:30:06 +01:00
Wouter de Bruijn
5a3467538e 🔧 Changed user for docker container 2025-02-28 15:26:54 +01:00
Wouter de Bruijn
50918e43fa 🔧 Changed user for docker container 2025-02-28 15:25:18 +01:00
Wouter de Bruijn
7781bc6732 🚨 "Fixed" linter warnings 2025-02-26 14:54:20 +01:00
Wouter de Bruijn
9ab5e09dd5 💡 Added docstring for module 2025-02-26 14:54:08 +01:00
Wouter de Bruijn
886c5b24b9 🔊 Improved log levels 2025-02-26 14:45:20 +01:00
Wouter de Bruijn
b314b2c883 🚨 Formatted and linted files 2025-02-26 14:00:18 +01:00
Wouter de Bruijn
0c798ec968 Added quiet param 2025-02-26 11:10:56 +01:00
Wouter de Bruijn
a5312365f9 📄 Added new cli params 2025-02-26 10:11:47 +01:00
Wouter de Bruijn
53066d2d51 Added separate log levels 2025-02-26 10:09:35 +01:00
Wouter de Bruijn
525904cf43 🚨 Linted and formatted file 2025-02-26 10:07:51 +01:00
Twan Kamans
1e269780ce Merge pull request #103 from q1x/new-ghcr-workflow
 VM inventory, usermacro and tag support
2025-02-20 15:45:02 +01:00
Twan Kamans
15d63ce3b8 Merge pull request #102 from TheNetworkGuy/main
Merge pull request #94 from TheNetworkGuy/develop
2025-02-20 15:39:47 +01:00
Raymond Kuiper
c810b06718 Merge pull request #7 from q1x/main
Update Dockerfile
2025-02-20 11:49:08 +01:00
Raymond Kuiper
825d788cfe Update Dockerfile 2025-02-20 11:42:25 +01:00
Raymond Kuiper
48a04c58e3 Merge pull request #6 from q1x/new-ghcr-workflow
New ghcr workflow
2025-02-20 11:29:16 +01:00
Raymond Kuiper
733df33b71 added step to run linting tests 2025-02-20 11:02:43 +01:00
Raymond Kuiper
593c8707af New publish-image workflow
Should remove the dependency on PAT
2025-02-20 11:01:04 +01:00
Raymond Kuiper
523393308d Updated docs 2025-02-19 16:25:11 +01:00
Raymond Kuiper
d65fa5b699 Added tag support 2025-02-19 15:56:01 +01:00
Raymond Kuiper
fd70045c6d Minor doc updates 2025-02-17 12:57:57 +01:00
Raymond Kuiper
f9453cc23c Updated documentation for usermacro support 2025-02-17 12:54:11 +01:00
Raymond Kuiper
3d4e7803cc Implemented vm_usermacro_map 2025-02-17 12:48:26 +01:00
Raymond Kuiper
edb9cd6ab6 Merge pull request #5 from q1x/vm_inventory
Sync from upstream
2025-02-14 16:41:46 +01:00
Raymond Kuiper
53d679e638 Merge pull request #4 from TheNetworkGuy/main
Merge from upstream
2025-02-14 16:38:11 +01:00
Raymond Kuiper
72558d3825 Updated docs for VM inventory 2025-02-14 16:35:40 +01:00
Raymond Kuiper
eea7df660a Full usermacro support 2025-02-14 15:18:26 +01:00
Raymond Kuiper
1b831a2d39 Moved Inventory mapping logic to tools module 2025-02-14 09:46:55 +01:00
Raymond Kuiper
6d4e250b23 Working usermacros based on config context 2025-02-14 08:28:10 +01:00
Raymond Kuiper
cebefd681e started work on macro support 2025-02-12 17:43:57 +01:00
Raymond Kuiper
4264dc9b31 Merge pull request #3 from q1x/vm_inventory
Vm inventory
2025-02-12 15:15:43 +01:00
Raymond Kuiper
c67180138e cleanup 2025-02-12 12:39:36 +01:00
Raymond Kuiper
b8bb3fb3f0 removed unsupported fields from vm_inventory_map 2025-02-12 12:36:27 +01:00
Raymond Kuiper
5f78a2c789 removed unsupported field from vm_inventory_map 2025-02-12 12:35:21 +01:00
Raymond Kuiper
1157ed9e64 cleanup 2025-02-12 12:32:42 +01:00
Raymond Kuiper
c7d3dab27c reverted module split, switched to class inheritance instead. Updated config example. 2025-02-12 12:30:28 +01:00
Raymond Kuiper
ba2f77a640 Added Pipfile ignore 2025-02-12 11:25:27 +01:00
Raymond Kuiper
4c91c660a8 removed newline 2025-02-12 11:22:27 +01:00
Raymond Kuiper
8272e34c12 removed pipenv artefacts 2025-02-12 11:20:45 +01:00
Twan Kamans
4c982ff0f5 Merge pull request #94 from TheNetworkGuy/develop
implements fix for hostgroup - host API call
2025-02-05 10:54:05 +01:00
TheNetworkGuy
7a671d6625 Also added backwards support for Zabbix 5 2025-02-04 12:46:00 +01:00
TheNetworkGuy
5617275594 implements fix for hostgroup - host API call 2025-02-04 12:40:13 +01:00
TheNetworkGuy
1673f7bb59 Downgrade to old version of zabbix_utils for Zabbix 7.2. Referenced in #91 2025-01-23 13:54:23 +01:00
Raymond Kuiper
c76e36ad38 Split inventory from the device module and started working on vm inventory support 2024-12-19 16:26:18 +01:00
TheNetworkGuy
b0eee8ad9b Fixed linter problems 2024-12-19 14:50:29 +01:00
Twan K.
9ff6b66c96 Merge pull request #88 from q1x/traversal_fixes
Traversal fixes
2024-12-19 14:44:39 +01:00
Raymond Kuiper
ffb8d5239c Embedded nesting in hostgroup init. 2024-12-18 14:06:40 +01:00
Raymond Kuiper
73d5306898 :Revert "added testing branch"
This reverts commit f301244306.
2024-12-18 13:00:16 +01:00
Raymond Kuiper
f301244306 added testing branch 2024-12-09 18:46:06 +01:00
Raymond Kuiper
867749ddd6 Merge pull request #86 from q1x/main
🏷️ Changed all occurrences of "Netbox" to "NetBox"
2024-12-06 14:01:31 +01:00
Raymond Kuiper
d0941ff909 🏷️ Changed all occurrences of "Netbox" to "NetBox" as per the [NetBox Style Guide](https://netboxlabs.com/docs/netbox/en/stable/development/style-guide/). 2024-12-06 13:51:05 +01:00
Raymond Kuiper
434722df53 Merge pull request #83 from retigra/main
 Added support for custom CA contexts within ZabbixAPI
2024-12-06 13:10:47 +01:00
Wouter de Bruijn
9131c940c5 📝 Added custom CA-bundle example 2024-12-05 14:35:25 +01:00
Wouter de Bruijn
8b670ba395 Added support for custom CA contexts within ZabbixAPI 2024-12-05 13:59:12 +01:00
TheNetworkGuy
4ec8036c88 Implemented #81 2024-11-21 08:38:42 +01:00
TheNetworkGuy
81764b589a Removed some forgotten code lines from testing 2024-11-18 14:11:38 +01:00
43 changed files with 7833 additions and 1438 deletions
+17
@@ -0,0 +1,17 @@
// For format details, see https://aka.ms/devcontainer.json. For config options, see the
// README at: https://github.com/devcontainers/templates/tree/main/src/python
{
    "name": "Python 3",
    // Or use a Dockerfile or Docker Compose file. More info: https://containers.dev/guide/dockerfile
    "image": "mcr.microsoft.com/devcontainers/python:3.14",
    // Features to add to the dev container. More info: https://containers.dev/features.
    // "features": {},
    // Use 'forwardPorts' to make a list of ports inside the container available locally.
    // "forwardPorts": [],
    // Use 'postCreateCommand' to run commands after the container is created.
    "postCreateCommand": "pip install --user uv && uv sync --frozen --dev"
    // Configure tool-specific properties.
    // "customizations": {},
    // Uncomment to connect as root instead. More info: https://aka.ms/dev-containers-non-root.
    // "remoteUser": "root"
}
+44 -35
View File
@@ -1,46 +1,55 @@
name: Publish Docker image to GHCR on a new version
---
name: Build and Push Docker Image
on:
push:
branches:
- main
- dockertest
# tags:
# - [0-9]+.*
env:
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}
release:
types: [published]
pull_request:
types: [opened, synchronize]
jobs:
test_quality:
uses: ./.github/workflows/quality.yml
build_and_publish:
uses: ./.github/workflows/quality.yml
test_code:
uses: ./.github/workflows/run_tests.yml
build:
runs-on: ubuntu-latest
steps:
- name: Checkout sources
uses: actions/checkout@v4
- name: Log in to the container registry
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GHCR_PAT }}
- name: Extract metadata (tags, labels)
id: meta
uses: docker/metadata-action@v5
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
tags: |
type=semver,pattern={{ version }}
type=ref,event=branch
type=raw,value=latest,enable=${{ github.ref == format('refs/heads/{0}', 'master') }}
type=sha
- name: Build and push Docker image
uses: docker/build-push-action@v5
with:
context: .
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@6524bf65af31da8d45b59e8c27de4bd072b392f5
- name: Login to GitHub Container Registry
uses: docker/login-action@9780b0c442fbb1117ed29e0efdff1e18412f7567
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract metadata
id: meta
uses: docker/metadata-action@369eb591f429131d6889c46b94e711f089e6ca96
with:
images: ghcr.io/${{ github.repository }}
tags: |
type=ref,event=branch
type=ref,event=pr
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
- name: Build and push Docker image
uses: docker/build-push-action@ca877d9245402d1537745e0e356eab47c3520991
with:
context: .
file: ./Dockerfile
push: true
platforms: linux/amd64,linux/arm64
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
annotations: |
index:org.opencontainers.image.description=Python script to synchronise NetBox devices to Zabbix.
+33
View File
@@ -0,0 +1,33 @@
name: Upload Python Package to PyPI when a Release is Created
permissions:
contents: read
on:
release:
types: [published]
jobs:
pypi-publish:
name: Publish release to PyPI
runs-on: ubuntu-latest
environment:
name: release
url: https://pypi.org/p/netbox-zabbix-sync
permissions:
id-token: write
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- name: Set up Python
uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065
with:
python-version: "3.x"
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install setuptools wheel build
- name: Build package
run: |
python -m build
- name: Publish package distributions to PyPI
uses: pypa/gh-action-pypi-publish@76f52bc884231f62b9a034ebfe128415bbaabdfc
+21 -18
View File
@@ -1,26 +1,29 @@
---
name: Pylint Quality control
name: Code Quality
on:
workflow_call
on:
pull_request:
workflow_call:
jobs:
build:
lint:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ["3.11","3.12"]
python-version: ["3.12", "3.13"]
steps:
- uses: actions/checkout@v4
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install pylint
pip install -r requirements.txt
- name: Analysing the code with pylint
run: |
pylint --module-naming-style=any $(git ls-files '*.py')
- uses: actions/checkout@v4
- name: Install uv
uses: astral-sh/setup-uv@v5
with:
enable-cache: true
- name: Set up Python ${{ matrix.python-version }}
run: uv python install ${{ matrix.python-version }}
- name: Install dependencies
run: uv sync --dev
- name: Lint with ruff
run: uv run ruff check .
- name: Format check with ruff
run: uv run ruff format --check .
- name: Type check with ty
run: uv run ty check
+27
View File
@@ -0,0 +1,27 @@
---
name: Tests
on:
pull_request:
workflow_call:
jobs:
test:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ["3.12", "3.13"]
steps:
- uses: actions/checkout@v4
- name: Install uv
uses: astral-sh/setup-uv@v5
with:
enable-cache: true
- name: Set up Python ${{ matrix.python-version }}
run: uv python install ${{ matrix.python-version }}
- name: Install dependencies
run: uv sync --dev
- name: Copy example config
run: cp config.py.example config.py
- name: Run tests with coverage
run: uv run pytest tests --cov --cov-report=term --cov-fail-under=70
+13 -1
View File
@@ -1,6 +1,18 @@
*.log
.venv
config.py
.env
/config.py
Pipfile
Pipfile.lock
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
.vscode
.flake
.coverage
*.egg-info
dist
build
netbox_zabbix_sync/_version.py
+1
View File
@@ -0,0 +1 @@
3.12
+12 -1
View File
@@ -1,9 +1,20 @@
# syntax=docker/dockerfile:1
FROM python:3.12-alpine
RUN mkdir -p /opt/netbox-zabbix && chown -R 1000:1000 /opt/netbox-zabbix
RUN mkdir -p /opt/netbox-zabbix
COPY . /opt/netbox-zabbix
RUN addgroup -g 1000 -S netbox-zabbix && adduser -u 1000 -S netbox-zabbix -G netbox-zabbix
RUN chown -R 1000:1000 /opt/netbox-zabbix
WORKDIR /opt/netbox-zabbix
COPY --chown=1000:1000 . /opt/netbox-zabbix
USER 1000:1000
RUN if ! [ -f ./config.py ]; then cp ./config.py.example ./config.py; fi
USER root
RUN pip install -r ./requirements.txt
USER 1000:1000
ENTRYPOINT ["python"]
CMD ["/opt/netbox-zabbix/netbox_zabbix_sync.py", "-v"]
+556 -136
View File
@@ -1,20 +1,25 @@
# NetBox to Zabbix synchronization
# Netbox to Zabbix synchronization
A script to create, update and delete Zabbix hosts using NetBox device objects. Tested and compatible with all [currently supported Zabbix releases](https://www.zabbix.com/life_cycle_and_release_policy).
A script to create, update and delete Zabbix hosts using Netbox device objects.
# Documentation
Documentation will be moved to the Github wiki of this project. Feel free to [check it out](https://github.com/TheNetworkGuy/netbox-zabbix-sync/wiki)!
## Installation via Docker
To pull the latest stable version to your local cache, use the following docker pull command:
```
To pull the latest stable version to your local cache, use the following docker
pull command:
```bash
docker pull ghcr.io/thenetworkguy/netbox-zabbix-sync:main
```
Make sure to specify the needed environment variables for the script to work (see [here](#set-environment-variables))
on the command line or use an [env file](https://docs.docker.com/reference/cli/docker/container/run/#env).
Make sure to specify the needed environment variables for the script to work
(see [here](#set-environment-variables)) on the command line or use an
[env file](https://docs.docker.com/reference/cli/docker/container/run/#env).
```
```bash
docker run -d -t -i -e ZABBIX_HOST='https://zabbix.local' \
-e ZABBIX_TOKEN='othersecrettoken' \
-e NETBOX_HOST='https://netbox.local' \
@@ -22,37 +27,56 @@ docker run -d -t -i -e ZABBIX_HOST='https://zabbix.local' \
--name netbox-zabbix-sync ghcr.io/thenetworkguy/netbox-zabbix-sync:main
```
This should run a one-time sync, you can check the sync with `docker logs netbox-zabbix-sync`.
This should run a one-time sync. You can check the sync with
`docker logs netbox-zabbix-sync`.
The image uses the default `config.py` for it's configuration, you can use a volume mount in the docker run command
to override with your own config file if needed (see [config file](#config-file)):
The image uses the default `config.py` for its configuration, you can use a
volume mount in the docker run command to override with your own config file if
needed (see [config file](#config-file)):
```bash
docker run -d -t -i -v $(pwd)/config.py:/opt/netbox-zabbix/config.py ...
```
docker run -d -t -i -v $(pwd)/config.py:/opt/netbox-zabbix/config.py ...
```
## Installation from Source
### Cloning the repository
```
```bash
git clone https://github.com/TheNetworkGuy/netbox-zabbix-sync.git
```
### Packages
Make sure that you have a python environment with the following packages installed. You can also use the `requirements.txt` file for installation with pip.
```
Make sure that you have a python environment with the following packages
installed. You can also use the `requirements.txt` file for installation with
pip.
```sh
# Packages:
pynetbox
pyzabbix
zabbix-utils
# Install them through requirements.txt from a venv:
virtualenv .venv
source .venv/bin/activate
.venv/bin/pip --require-virtualenv install -r requirements.txt
```
### Config file
First time user? Copy the `config.py.example` file to `config.py`. This file is used for modifying filters and setting variables such as custom field names.
```
First time user? Copy the `config.py.example` file to `config.py`. This file is
used for modifying filters and setting variables such as custom field names.
```sh
cp config.py.example config.py
```
### Set environment variables
Set the following environment variables:
```
```bash
ZABBIX_HOST="https://zabbix.local"
ZABBIX_USER="username"
ZABBIX_PASS="Password"
@@ -60,15 +84,25 @@ NETBOX_HOST="https://netbox.local"
NETBOX_TOKEN="secrettoken"
```
Or, you can use a Zabbix API token to login instead of using a username and password.
In that case `ZABBIX_USER` and `ZABBIX_PASS` will be ignored.
Or, you can use a Zabbix API token to login instead of using a username and
password. In that case `ZABBIX_USER` and `ZABBIX_PASS` will be ignored.
```
```bash
ZABBIX_TOKEN=othersecrettoken
```
### Netbox custom fields
Use the following custom fields in Netbox (if you are using config context for the template information then the zabbix_template field is not required):
If you are using custom SSL certificates for NetBox and/or Zabbix, you can set
the following environment variable to the path of your CA bundle file:
```sh
export REQUESTS_CA_BUNDLE=/path/to/your/ca-bundle.crt
```
### NetBox custom fields
Use the following custom fields in NetBox (if you are using config context for
the template information then the zabbix_template field is not required):
```
* Type: Integer
* Name: zabbix_hostid
@@ -76,6 +110,7 @@ Use the following custom fields in Netbox (if you are using config context for t
* Default: null
* Object: dcim > device
```
```
* Type: Text
* Name: zabbix_template
@@ -83,154 +118,271 @@ Use the following custom fields in Netbox (if you are using config context for t
* Default: null
* Object: dcim > device_type
```
You can make the `zabbix_hostid` field hidden or read-only to prevent human intervention.
This is optional and there is a use case for leaving it read-write in the UI to manually change the ID. For example to re-run a sync.
You can make the `zabbix_hostid` field hidden or read-only to prevent human
intervention.
This is optional, but there may be cases where you want to leave it
read-write in the UI. For example to manually change or clear the ID and re-run a sync.
## Virtual Machine (VM) Syncing
In order to use VM syncing, make sure that the `zabbix_id` custom field is also present on Virtual machine objects in Netbox.
In order to use VM syncing, make sure that the `zabbix_id` custom field is also
present on Virtual machine objects in NetBox.
Use the `config.py` file and set the `sync_vms` variable to `True`.
You can set the `vm_hostgroup_format` variable to a customizable value for VM hostgroups. The default is `cluster_type/cluster/role`.
You can set the `vm_hostgroup_format` variable to a customizable value for VM
hostgroups. The default is `cluster_type/cluster/role`.
To enable filtering for VM's, check the `nb_vm_filter` variable out. It works the same as with the device filter (see documentation under "Hostgroup layout"). Note that not all filtering capabilities and properties of devices are applicable to VM's and vice-versa. Check the Netbox API documentation to see which filtering options are available for each object type.
To enable filtering for VM's, check the `nb_vm_filter` variable out. It works
the same as with the device filter (see documentation under "Hostgroup layout").
Note that not all filtering capabilities and properties of devices are
applicable to VM's and vice-versa. Check the NetBox API documentation to see
which filtering options are available for each object type.
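Putting the options above together, a `config.py` fragment for VM syncing could look like the following. The `nb_vm_filter` value shown here is purely illustrative; use whatever NetBox API filter fits your environment:

```python
# Enable syncing of NetBox virtual machines to Zabbix
sync_vms = True

# Hostgroup layout used for VMs (default shown)
vm_hostgroup_format = "cluster_type/cluster/role"

# Only sync VMs matching this NetBox API filter (illustrative value)
nb_vm_filter = {"status": "active"}
```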
## Config file
### Hostgroup
Setting the `create_hostgroups` variable to `False` requires manual hostgroup creation for devices in a new category. I would recommend setting this variable to `True` since leaving it on `False` results in a lot of manual work.
The format can be set with the `hostgroup_format` variable for devices and `vm_hostgroup_format` for devices.
Setting the `create_hostgroups` variable to `False` requires manual hostgroup
creation for devices in a new category. I would recommend setting this variable
to `True` since leaving it on `False` results in a lot of manual work.
Any nested parent hostgroups will also be created automatically. For instance the region `Berlin` with parent region `Germany` will create the hostgroup `Germany/Berlin`.
The format can be set with the `hostgroup_format` variable for devices and
`vm_hostgroup_format` for virtual machines.
Make sure that the Zabbix user has proper permissions to create hosts.
The hostgroups are in a nested format. This means that proper permissions only need to be applied to the site name hostgroup and cascaded to any child hostgroups.
Any nested parent hostgroups will also be created automatically. For instance
the region `Berlin` with parent region `Germany` will create the hostgroup
`Germany/Berlin`.
Make sure that the Zabbix user has proper permissions to create hosts. The
hostgroups are in a nested format. This means that proper permissions only need
to be applied to the site name hostgroup and cascaded to any child hostgroups.
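The nested creation described above can be sketched as a simple path expansion. This is only an illustration of the behaviour, not the script's actual implementation:

```python
def parent_hostgroups(hostgroup: str) -> list[str]:
    """Expand 'Germany/Berlin' into every hostgroup that must
    exist, parents first."""
    parts = hostgroup.split("/")
    return ["/".join(parts[: i + 1]) for i in range(len(parts))]

print(parent_hostgroups("Germany/Berlin"))
# ['Germany', 'Germany/Berlin']
```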
#### Layout
The default hostgroup layout is "site/manufacturer/device_role".
You can change this behaviour with the hostgroup_format variable. The following values can be used:
The default hostgroup layout is "site/manufacturer/device_role". You can change
this behaviour with the hostgroup_format variable. The following values can be
used:
**Both devices and virtual machines**
| name | description |
| ------------ | ------------ |
|role|Role name of a device or VM|
|region|The region name|
|site|Site name|
|site_group|Site group name|
|tenant|Tenant name|
|tenant_group|Tenant group name|
|platform|Software platform of a device or VM|
|custom fields|See the section "Layout -> Custom Fields" to use custom fields as hostgroup variable|
| name | description |
| ------------- | ------------------------------------------------------------------------------------ |
| role | Role name of a device or VM |
| region | The region name |
| site | Site name |
| site_group | Site group name |
| tenant | Tenant name |
| tenant_group | Tenant group name |
| platform | Software platform of a device or VM |
| custom fields | See the section "Layout -> Custom Fields" to use custom fields as hostgroup variable |
**Only for devices**
| name | description |
| ------------ | ------------ |
|location|The device location name|
|manufacturer|Device manufacturer name|
| name | description |
| ------------ | ------------------------ |
| location | The device location name |
| manufacturer | Device manufacturer name |
| rack | Rack |
**Only for VMs**
| name | description |
| ------------ | ------------ |
|cluster|VM cluster name|
|cluster_type|VM cluster type|
| name | description |
| ------------ | --------------- |
| cluster | VM cluster name |
| cluster_type | VM cluster type |
| device | parent device |
You can specify the value separated by a "/" like so:
```python
hostgroup_format = "tenant/site/location/role"
```
You can also provide a list of groups like so:
```python
hostgroup_format = ["region/site_group/site", "role", "tenant_group/tenant"]
```
You can specify the value sperated by a "/" like so:
```
hostgroup_format = "tenant/site/dev_location/role"
```
**Group traversal**
The default behaviour for `region` is to only use the directly assigned region in the rendered hostgroup name.
However, by setting `traverse_region` to `True` in `config.py` the script will render a full region path of all parent regions for the hostgroup name.
`traverse_site_groups` controls the same behaviour for site_groups.
The default behaviour for `region` is to only use the directly assigned region
in the rendered hostgroup name. However, by setting `traverse_region` to `True`
in `config.py` the script will render a full region path of all parent regions
for the hostgroup name. `traverse_site_groups` controls the same behaviour for
site_groups.
**Hardcoded text**
You can add hardcoded text in the hostgroup format by using quotes; this will
insert the literal text:
```python
hostgroup_format = "'MyDevices'/location/role"
```
In this case, the prefix MyDevices will be used for all generated groups.
**Custom fields**
You can use the value of custom fields for hostgroup generation. This allows more freedom and even allows a full static mapping instead of a dynamic rendered hostgroup name.
You can use the value of custom fields for hostgroup generation. This allows
more freedom and even allows a full static mapping instead of a dynamic rendered
hostgroup name.
For instance a custom field with the name `mycustomfieldname` and type string
has the following values for 2 devices:
For instance a custom field with the name `mycustomfieldname` and type string has the following values for 2 devices:
```
Device A has the value Train for custom field mycustomfieldname.
Device B has the value Bus for custom field mycustomfieldname.
Both devices are located in the site Paris.
```
With the hostgroup format `site/mycustomfieldname` the following hostgroups will be generated:
With the hostgroup format `site/mycustomfieldname` the following hostgroups will
be generated:
```
Device A: Paris/Train
Device B: Paris/Bus
```
**Empty variables or hostgroups**
Should the content of a variable be empty, then the hostgroup position is skipped.
Should the content of a variable be empty, then the hostgroup position is
skipped.
For example, consider the following scenario with 2 devices, both the same
device type and site. One of them is linked to a tenant, the other one does not
have a relationship with a tenant.
For example, consider the following scenario with 2 devices, both the same device type and site. One of them is linked to a tenant, the other one does not have a relationship with a tenant.
- Device_role: PDU
- Site: HQ-AMS
```python
hostgroup_format = "site/tenant/role"
```
hostgroup_format = "site/tenant/device_role"
```
When running the script like above, the following hostgroup (HG) will be generated for both hosts:
- Device A with no relationship with a tenant: HQ-AMS/PDU
- Device B with a relationship to tenant "Fork Industries": HQ-AMS/Fork Industries/PDU
When running the script like above, the following hostgroup (HG) will be
generated for both hosts:
- Device A with no relationship with a tenant: HQ-AMS/PDU
- Device B with a relationship to tenant "Fork Industries": HQ-AMS/Fork
Industries/PDU
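The skipping behaviour can be illustrated with a short sketch. This is not the script's own code, just a model of the rule that empty positions drop out of the joined path:

```python
def render_hostgroup(fmt: str, values: dict) -> str:
    """Join the value of each variable in the format with '/';
    positions whose value is empty are skipped entirely."""
    parts = [values.get(var) for var in fmt.split("/")]
    return "/".join(p for p in parts if p)

device_a = {"site": "HQ-AMS", "tenant": None, "role": "PDU"}
device_b = {"site": "HQ-AMS", "tenant": "Fork Industries", "role": "PDU"}
print(render_hostgroup("site/tenant/role", device_a))  # HQ-AMS/PDU
print(render_hostgroup("site/tenant/role", device_b))  # HQ-AMS/Fork Industries/PDU
```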
The same logic applies to custom fields being used in the HG format:
```
```python
hostgroup_format = "site/mycustomfieldname"
```
For device A with the value "ABC123" in the custom field "mycustomfieldname" -> HQ-AMS/ABC123
For a device which does not have a value in the custom field "mycustomfieldname" -> HQ-AMS
Should there be a scenario where a custom field does not have a value under a device, and the HG format only uses this single variable, then this will result in an error:
For device A with the value "ABC123" in the custom field "mycustomfieldname" ->
HQ-AMS/ABC123 For a device which does not have a value in the custom field
"mycustomfieldname" -> HQ-AMS
Should there be a scenario where a custom field does not have a value under a
device, and the HG format only uses this single variable, then this will result
in an error:
```
hostgroup_format = "mycustomfieldname"
Netbox-Zabbix-sync - ERROR - ESXI1 has no reliable hostgroup. This is most likely due to the use of custom fields that are empty.
NetBox-Zabbix-sync - ERROR - ESXI1 has no reliable hostgroup. This is most likely due to the use of custom fields that are empty.
```
### Device status
By setting a status on a Netbox device you determine how the host is added (or updated) in Zabbix. There are, by default, 3 options:
* Delete the host from Zabbix (triggered by Netbox status "Decommissioning" and "Inventory")
* Create the host in Zabbix but with a disabled status (Trigger by "Offline", "Planned", "Staged" and "Failed")
* Create the host in Zabbix with an enabled status (For now only enabled with the "Active" status)
You can modify this behaviour by changing the following list variables in the script:
- `zabbix_device_removal`
- `zabbix_device_disable`
### Extended site properties
By default, NetBox will only return the following properties under the 'site' key for a device:
- site id
- (api) url
- display name
- name
- slug
- description
However, NetBox-Zabbix-Sync allows you to extend these site properties with the full site information
so you can use this data in inventory fields, tags and usermacros.
To enable this functionality, enable the following setting in your configuration file:
`extended_site_properties = True`
Keep in mind that enabling this option will increase the number of API calls to your NetBox instance,
which might impact performance on large syncs.
### Device status
By setting a status on a NetBox device you determine how the host is added (or
updated) in Zabbix. There are, by default, 3 options:
- Delete the host from Zabbix (triggered by NetBox status "Decommissioning" and
"Inventory")
- Create the host in Zabbix but with a disabled status (Trigger by "Offline",
"Planned", "Staged" and "Failed")
- Create the host in Zabbix with an enabled status (For now only enabled with
the "Active" status)
You can modify this behaviour by changing the following list variables in the
script:
- `zabbix_device_removal`
- `zabbix_device_disable`
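Mirroring the defaults described above, the two list variables look like this in `config.py`:

```python
# NetBox statuses that trigger host removal from Zabbix
zabbix_device_removal = ["Decommissioning", "Inventory"]

# NetBox statuses that create the host in Zabbix with monitoring disabled
zabbix_device_disable = ["Offline", "Planned", "Staged", "Failed"]
```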
### Zabbix Inventory
This script allows you to enable the inventory on managed Zabbix hosts and sync NetBox device properties to the specified inventory fields.
To map Netbox information to Netbox inventory fields, set `inventory_sync` to `True`.
You can set the inventory mode to "disabled", "manual" or "automatic" with the `inventory_mode` variable.
See [Zabbix Manual](https://www.zabbix.com/documentation/current/en/manual/config/hosts/inventory#building-inventory) for more information about the modes.
This script allows you to enable the inventory on managed Zabbix hosts and sync
NetBox device properties to the specified inventory fields. To map NetBox
information to NetBox inventory fields, set `inventory_sync` to `True`.
Use the `inventory_map` variable to map which NetBox properties are used in which Zabbix Inventory fields.
For nested properties, you can use the '/' seperator.
For example, the following map will assign the custom field 'mycustomfield' to the 'alias' Zabbix inventory field:
```
You can set the inventory mode to "disabled", "manual" or "automatic" with the
`inventory_mode` variable. See
[Zabbix Manual](https://www.zabbix.com/documentation/current/en/manual/config/hosts/inventory#building-inventory)
for more information about the modes.
Use the `device_inventory_map` variable to map which NetBox properties are used in
which Zabbix Inventory fields. For nested properties, you can use the '/'
separator. For example, the following map will assign the custom field
'mycustomfield' to the 'alias' Zabbix inventory field:
For Virtual Machines, use `vm_inventory_map`.
```python
inventory_sync = True
inventory_mode = "manual"
inventory_map = { "custom_fields/mycustomfield/name": "alias"}
device_inventory_map = {"custom_fields/mycustomfield/name": "alias"}
vm_inventory_map = {"custom_fields/mycustomfield/name": "alias"}
```
See `config.py.example` for an extensive example map.
Any Zabix Inventory fields that are not included in the map will not be touched by the script,
so you can safely add manual values or use items to automatically add values to other fields.
See `config.py.example` for an extensive example map. Any Zabbix Inventory fields
that are not included in the map will not be touched by the script, so you can
safely add manual values or use items to automatically add values to other
fields.
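The nested '/' lookup used by these maps can be sketched as follows. This is an illustration of the traversal, not the script's own helper:

```python
def resolve(obj, path: str):
    """Walk a NetBox object dict along a '/'-separated path,
    returning None as soon as any step is missing."""
    for key in path.split("/"):
        if not isinstance(obj, dict) or key not in obj:
            return None
        obj = obj[key]
    return obj

device = {"custom_fields": {"mycustomfield": {"name": "ABC123"}}}
print(resolve(device, "custom_fields/mycustomfield/name"))  # ABC123
```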
### Template source
You can either use a Netbox device type custom field or Netbox config context for the Zabbix template information.
Using a custom field allows for only one template. You can assign multiple templates to one host using the config context source.
Should you make use of an advanced templating structure with lots of nesting then i would recommend sticking to the custom field.
You can either use a NetBox device type custom field or NetBox config context
for the Zabbix template information.
You can change the behaviour in the config file. By default this setting is false but you can set it to true to use config context:
```
Using a custom field allows for only one template. You can assign multiple
templates to one host using the config context source. Should you make use of an
advanced templating structure with lots of nesting, then I would recommend
sticking to the custom field.
You can change the behaviour in the config file. By default this setting is
false but you can set it to true to use config context:
```python
templates_config_context = True
```
After that make sure that for each host there is at least one template defined in the config context in this format:
```
After that make sure that for each host there is at least one template defined
in the config context in this format:
```json
{
"zabbix": {
"templates": [
@@ -243,41 +395,251 @@ After that make sure that for each host there is at least one template defined i
}
```
You can also opt for the default device type custom field behaviour but with the added benefit of overwriting the template should a device in Netbox have a device specific context defined. In this case the device specific context template(s) will take priority over the device type custom field template.
```
You can also opt for the default device type custom field behaviour but with the
added benefit of overwriting the template should a device in NetBox have a
device specific context defined. In this case the device specific context
template(s) will take priority over the device type custom field template.
```python
templates_config_context_overrule = True
```
### Tags
This script can sync host tags to your Zabbix hosts for use in filtering,
SLA calculations and event correlation.
Tags can be synced from the following sources:
1. NetBox device/vm tags
2. NetBox config context
3. NetBox fields
Syncing tags will override any tags that were set manually on the host,
making NetBox the single source-of-truth for managing tags.
To enable syncing, turn on `tag_sync` in the config file.
By default, this script will modify tag names and tag values to lowercase.
You can change this behavior by setting `tag_lower` to `False`.
```python
tag_sync = True
tag_lower = True
```
#### Device tags
As NetBox doesn't follow the tag/value pattern for tags, we need a tag
name under which the NetBox tags are registered.
By default the tag name is "NetBox", but you can change this to whatever you want.
The value for the tag can be set to 'name', 'display', or 'slug', which refers to the
property of the NetBox tag object that will be used as the value in Zabbix.
```python
tag_name = 'NetBox'
tag_value = 'name'
```
#### Config context
You can supply custom tags via config context by adding the following:
```json
{
"zabbix": {
"tags": [
{
"MyTagName": "MyTagValue"
},
{
"environment": "production"
}
],
}
}
```
This will allow you to assign tags based on the config context rules.
#### NetBox Field
NetBox field can also be used as input for tags, just like inventory and usermacros.
To enable syncing from fields, make sure to configure a `device_tag_map` and/or a `vm_tag_map`.
```python
device_tag_map = {"site/name": "site",
"rack/name": "rack",
"platform/name": "target"}
vm_tag_map = {"site/name": "site",
"cluster/name": "cluster",
"platform/name": "target"}
```
To turn off field syncing, set the maps to empty dictionaries:
```python
device_tag_map = {}
vm_tag_map = {}
```
### Usermacros
You can choose to use NetBox as a source for Host usermacros by
enabling the following option in the configuration file:
```python
usermacro_sync = True
```
Please be advised that enabling this option will _clear_ any usermacros
manually set on the managed hosts and override them with the usermacros
from NetBox.
There are two NetBox sources that can be used to populate usermacros:
1. NetBox config context
2. NetBox fields
#### Config context
By defining a dictionary `usermacros` within the `zabbix` key in
config context, you can dynamically assign usermacro values based on
anything that you can target based on
[config contexts](https://netboxlabs.com/docs/netbox/en/stable/features/context-data/)
within NetBox.
Through this method, it is possible to define the following types of usermacros:
1. Text
2. Secret
3. Vault
The default macro type is text if no `type` and `value` have been set.
It is also possible to create usermacros with
[context](https://www.zabbix.com/documentation/7.0/en/manual/config/macros/user_macros_context).
Examples:
```json
{
"zabbix": {
"usermacros": {
"{$USER_MACRO}": "test value",
"{$CONTEXT_MACRO:\"test\"}": "test value",
"{$CONTEXT_REGEX_MACRO:regex:\".*\"}": "test value",
"{$SECRET_MACRO}": {
"type": "secret",
"value": "PaSsPhRaSe"
},
"{$VAULT_MACRO}": {
"type": "vault",
"value": "secret/vmware:password"
},
"{$USER_MACRO2}": {
"type": "text",
"value": "another test value"
}
}
}
}
```
Please be aware that secret usermacros are only synced _once_ by default.
This is the default behavior because Zabbix API won't return the value of
secrets so the script cannot compare the values with those set in NetBox.
If you update a secret usermacro value, just remove the value from the host
in Zabbix and the new value will be synced during the next run.
Alternatively, you can set the following option in the config file:
```python
usermacro_sync = "full"
```
This will force a full usermacro sync on every run on hosts that have secret usermacros set.
That way, you will know for sure the secret values are always up to date.
Keep in mind that NetBox will show your secrets in plain text.
If true secrecy is required, consider switching to
[vault](https://www.zabbix.com/documentation/current/en/manual/config/macros/secret_macros#vault-secret)
usermacros.
#### NetBox Fields
To use NetBox fields as a source for usermacros, you will need to set up usermacro maps
for devices and/or virtual machines in the configuration file.
This method only supports `text` type usermacros.
For example:
```python
usermacro_sync = True
device_usermacro_map = {"serial": "{$HW_SERIAL}",
"role/name": "{$DEV_ROLE}",
"url": "{$NB_URL}",
"id": "{$NB_ID}"}
vm_usermacro_map = {"memory": "{$TOTAL_MEMORY}",
"role/name": "{$DEV_ROLE}",
"url": "{$NB_URL}",
"id": "{$NB_ID}"}
```
## Permissions
### Netbox
Make sure that the Netbox user has proper permissions for device read and modify (modify to set the Zabbix HostID custom field) operations. The user should also have read-only access to the device types.
### NetBox
Make sure that the NetBox user has proper permissions for device read and modify
(modify to set the Zabbix HostID custom field) operations. The user should also
have read-only access to the device types.
### Zabbix
Make sure that the Zabbix user has permissions to read hostgroups and proxy
servers. The user should have full rights on creating, modifying and deleting
hosts.
If you want to automatically create hostgroups then the create permission on
host-groups should also be applied.
### Custom links
To make the user experience easier you could add a custom link that redirects
users to the Zabbix latest data.
```
* Name: zabbix_latestData
* Text: {% if object.cf["zabbix_hostid"] %}Show host in Zabbix{% endif %}
* URL: http://myzabbixserver.local/zabbix.php?action=latest.view&hostids[]={{ object.cf["zabbix_hostid"] }}
```
## Running the script
```
python3 netbox_zabbix_sync.py
```
### Flags
| Flag | Option | Description |
| ---- | --------- | ------------------------------------- |
| -v | verbose | Log with info on. |
| -vv | debug | Log with debugging on. |
| -vvv | debug-all | Log with debugging on for all modules |
## Config context
### Zabbix proxy
#### Config Context
You can set the proxy for a device using the `proxy` key in config context.
```json
{
"zabbix": {
"proxy": "proxy1.example.com"
}
}
```
It is now possible to specify proxy groups with the introduction of Proxy groups
in Zabbix 7. Specifying a group in the config context on older Zabbix releases
will have no impact and the script will ignore the statement.
```json
{
"zabbix": {
"proxy_group": "proxy-group-1"
}
}
```
The script will prefer groups when specifying both a proxy and group. This is
done with the assumption that groups are more resilient and HA ready, making it
a more logical choice to use for proxy linkage. This also makes migrating from a
proxy to proxy group easier since the group take priority over the individual
proxy.
```json
{
"zabbix": {
"proxy": "proxy1.example.com",
"proxy_group": "proxy-group-1"
}
}
```
Because of the potential for destructive changes when NetBox is set up but the
proxy setting is forgotten, the proxy sync works a bit differently. By default
everything is synced, except in the situation where the Zabbix host has a proxy
configured but nothing is configured in NetBox. To force deletion and a full
sync, set the `full_proxy_sync` variable in the config file.
In the example above the host will use the group on Zabbix 7. On Zabbix 6 and
below the host will use the proxy. Zabbix 7 will use the proxy value when
omitting the proxy_group value.
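That precedence can be sketched as follows; this is a simplified illustration, not the script's actual lookup code:

```python
def pick_proxy(config_context, zabbix_major):
    """Return ('proxy_group'|'proxy', name) honoring group precedence.

    Proxy groups exist from Zabbix 7 onward; older versions fall
    through to the plain proxy key and ignore proxy_group.
    """
    zbx = config_context.get("zabbix", {})
    keys = ["proxy"]
    if zabbix_major >= 7:
        keys.insert(0, "proxy_group")  # groups win when both are set
    for key in keys:
        if zbx.get(key):
            return key, zbx[key]
    return None

cc = {"zabbix": {"proxy": "proxy1", "proxy_group": "group1"}}
print(pick_proxy(cc, 7))  # ('proxy_group', 'group1')
print(pick_proxy(cc, 6))  # ('proxy', 'proxy1')
```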
#### Custom Field
Alternatively, you can use a custom field for assigning a device or VM to
a Zabbix proxy or proxy group. The custom fields can be assigned to both
Devices and VMs.
You can also assign these custom fields to a site to allow all devices/VMs
in that site to be configured with the same proxy or proxy group.
In order for this to work, `extended_site_properties` needs to be enabled in
the configuration as well.
To use the custom fields for proxy configuration, configure one or both
of the following settings in the configuration file with the actual names of your
custom fields:
```python
proxy_cf = "zabbix_proxy"
proxy_group_cf = "zabbix_proxy_group"
```
As with config context proxy configuration, proxy group will take precedence over
standalone proxy when configured.
Proxy settings configured on the device or VM will in their turn take precedence
over any site configuration.
If the custom fields have no value but the proxy or proxy group is configured in config context,
that setting will be used.
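Putting the pieces together, the resolution order described above can be sketched like this; the helper and argument names are illustrative:

```python
def resolve_proxy_setting(device_cf_value, site_cf_value, config_context_value):
    """Resolve which proxy (or proxy group) value applies to a host.

    The device/VM custom field wins, then the site's custom field
    (requires extended_site_properties), then config context.
    """
    for value in (device_cf_value, site_cf_value, config_context_value):
        if value:
            return value
    return None

print(resolve_proxy_setting("dev-proxy", "site-proxy", None))  # dev-proxy
print(resolve_proxy_setting(None, "site-proxy", "cc-proxy"))   # site-proxy
print(resolve_proxy_setting(None, None, "cc-proxy"))           # cc-proxy
```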
### Set interface parameters within NetBox
When adding a new device, you can set the interface type with custom context. By
default, the following configuration is applied when no config context is
provided:
- SNMPv2
- UDP 161
- Bulk requests enabled
- SNMP community: {$SNMP_COMMUNITY}
Due to Zabbix limitations of changing interface type with a linked template,
changing the interface type from within NetBox is not supported and the script
will generate an error.
For example, when changing a SNMP interface to an Agent interface:
```
NetBox-Zabbix-sync - WARNING - Device: Interface OUT of sync.
NetBox-Zabbix-sync - ERROR - Device: changing interface type to 1 is not supported.
```
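The defaults listed above amount to an interface object roughly like the following. This is a hedged sketch: the field names follow the Zabbix hostinterface API, but the helper is illustrative and not the script's actual `ZabbixInterface` class:

```python
def default_snmp_interface(ip):
    """Default Zabbix interface applied when no config context is given:
    SNMPv2 over UDP 161, bulk requests enabled, community macro."""
    return {
        "type": 2,    # 2 = SNMP interface (1 would be a Zabbix agent)
        "useip": 1,
        "ip": ip,
        "dns": "",
        "port": "161",
        "details": {
            "version": 2,                      # SNMPv2
            "bulk": 1,                         # bulk requests enabled
            "community": "{$SNMP_COMMUNITY}",  # resolved on the host/template
        },
    }

iface = default_snmp_interface("192.0.2.10")
print(iface["details"]["community"])  # {$SNMP_COMMUNITY}
```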
To configure the interface parameters you'll need to use custom context. Custom
context was used to make this script as customizable as possible for each
environment. For example, you could:
- Set the custom context directly on a device
- Set the custom context on a tag, which you would add to a device (for
instance, SNMPv3)
- Set the custom context on a device role
- Set the custom context on a site or region
##### Agent interface configuration example
```json
{
"zabbix": {
"interface_type": 1,
"interface_port": 10050
}
}
```
##### SNMPv2 interface configuration example
```json
{
"zabbix": {
"interface_type": 2,
"interface_port": 161,
"snmp": {
"bulk": 1,
"community": "{$SNMP_COMMUNITY}",
"version": 2
}
}
}
```
##### SNMPv3 interface configuration example
```json
{
"zabbix": {
"interface_type": 2,
"interface_port": 161,
"snmp": {
"bulk": 1,
"version": 3,
"securityname": "{$SNMP_SECNAME}",
"securitylevel": "{$SNMP_SECLEVEL}",
"authprotocol": "{$SNMP_AUTHPROT}",
"authpassphrase": "{$SNMP_AUTHPASS}",
"privprotocol": "{$SNMP_PRIVPROT}",
"privpassphrase": "{$SNMP_PRIVPASS}",
"contextname": "{$SNMP_CONTEXT}"
}
}
}
```
I would recommend using usermacros for sensitive data such as community strings
since the data in NetBox is plain-text.
> **_NOTE:_** Not all SNMP data is required for a working configuration.
> [The following parameters are allowed](https://www.zabbix.com/documentation/current/manual/api/reference/hostinterface/object#details_tag "The following parameters are allowed") but
> are not all required, depending on your environment.
templates_config_context = False
# higher priority than custom field templates
templates_config_context_overrule = False
# Set template and device NetBox "custom field" names
# Template_cf is not used when templates_config_context is enabled
template_cf = "zabbix_template"
device_cf = "zabbix_hostid"
# Zabbix host description
# The following options are available for the description of all created hosts in Zabbix
# static: Uses the default static string "Host added by NetBox sync script."
# dynamic: Uses a predefined dynamic string which resolves the owner of an object and the datetime. Recommended for users who use NetBox 4.5+
# custom: Use a custom string such as "This host was created by Zabbix-sync on machine MGMT01.internal". It is also possible to resolve dynamic values in this string using {} markers.
description = "static"
# The timedate format which is used for generating the datetime macro when used in the dynamic description type or custom type.
description_dt_format = "%Y-%m-%d %H:%M:%S"
## Enable clustering of devices with virtual chassis setup
clustering = False
vm_hostgroup_format = "cluster_type/cluster/role"
# With this option disabled proxies will only be added and modified for Zabbix hosts.
full_proxy_sync = False
## NetBox to Zabbix device state conversion
zabbix_device_removal = ["Decommissioning", "Inventory"]
zabbix_device_disable = ["Offline", "Planned", "Staged", "Failed"]
hostgroup_format = "site/manufacturer/role"
traverse_regions = False
traverse_site_groups = False
## Extended site properties
# By default, NetBox will only return basic site info for any device or VM.
# By setting `extended_site_properties` to True, the script will query NetBox for additional site info.
# Be aware that this will increase the number of API queries to NetBox.
extended_site_properties = False
## Filtering
# Custom device filter, variable must be present but can be left empty with no filtering.
# A couple of examples:
# nb_device_filter = {"site": ["HQ-AMS", "HQ-FRA"]} #Device must be in either one of these sites
# nb_device_filter = {"site": "HQ-AMS", "tag": "zabbix", "role__n": ["PDU", "console-server"]} #Device must be in site HQ-AMS, have the tag zabbix and must not be part of the PDU or console-server role
# Default device filter, only get devices which have a name in NetBox:
nb_device_filter = {"name__n": "null"}
# Default filter for VMs
nb_vm_filter = {"name__n": "null"}
inventory_sync = False
# For nested properties, you can use the '/' separator.
# For example, the following map will assign the custom field 'mycustomfield' to the 'alias' Zabbix inventory field:
#
# device_inventory_map = { "custom_fields/mycustomfield/name": "alias"}
#
# The following maps should provide some nice defaults:
device_inventory_map = { "asset_tag": "asset_tag",
"virtual_chassis/name": "chassis",
"status/label": "deployment_status",
"location/name": "location",
"latitude": "location_lat",
"longitude": "location_lon",
"comments": "notes",
"name": "name",
"rack/name": "site_rack",
"serial": "serialno_a",
"device_type/model": "type",
"device_type/manufacturer/name": "vendor",
"oob_ip/address": "oob_ip" }
# Replace latitude and longitude with site/latitude and site/longitude to use
# site geo data. Enable extended_site_properties for this to work!
# We also support inventory mapping on Virtual Machines.
vm_inventory_map = { "status/label": "deployment_status",
"comments": "notes",
"name": "name" }
# To allow syncing of usermacros from NetBox, set to True.
# this will enable both field mapping and config context usermacros.
#
# If set to "full", it will force the update of secret usermacros every run.
# Please see the README.md for more information.
usermacro_sync = False
# device usermacro_map to map NetBox fields to usermacros.
device_usermacro_map = {"serial": "{$HW_SERIAL}",
"role/name": "{$DEV_ROLE}",
"display_url": "{$NB_URL}",
"id": "{$NB_ID}"}
# virtual machine usermacro_map to map NetBox fields to usermacros.
vm_usermacro_map = {"memory": "{$TOTAL_MEMORY}",
"role/name": "{$DEV_ROLE}",
"display_url": "{$NB_URL}",
"id": "{$NB_ID}"}
# To sync host tags to Zabbix, set to True.
tag_sync = False
# Setting tag_lower to True will lowercase tag names and values.
# This is more in line with the Zabbix way of working with tags.
#
# You can however set this to False to ensure capital letters are synced to Zabbix tags.
tag_lower = True
# We can sync NetBox device/VM tags to Zabbix, but as NetBox tags don't follow the key/value
# pattern, we need to specify a tag name to register the NetBox tags in Zabbix.
#
# If tag_name is set to False, we won't sync NetBox device/VM tags to Zabbix.
tag_name = 'NetBox'
# We can choose to use 'name', 'slug' or 'display' NetBox tag properties as a value in Zabbix.
# 'name' is used by default.
tag_value = "name"
# device tag_map to map NetBox fields to host tags.
device_tag_map = {"site/name": "site",
"rack/name": "rack",
"platform/name": "target"}
# Virtual machine tag_map to map NetBox fields to host tags.
vm_tag_map = {"site/name": "site",
"cluster/name": "cluster",
"platform/name": "target"}
#!/usr/bin/env python3
# pylint: disable=invalid-name, logging-not-lazy, too-many-locals, logging-fstring-interpolation, too-many-lines
"""
Device specific handling for Netbox to Zabbix
"""
import sys
from re import search
from logging import getLogger
from zabbix_utils import APIRequestError
from modules.exceptions import (SyncInventoryError, TemplateError, SyncExternalError,
InterfaceConfigError, JournalError)
from modules.interface import ZabbixInterface
from modules.hostgroups import Hostgroup
try:
from config import (
template_cf, device_cf,
traverse_site_groups,
traverse_regions,
inventory_sync,
inventory_mode,
inventory_map
)
except ModuleNotFoundError:
print("Configuration file config.py not found in main directory. "
"Please create the file or rename the config.py.example file to config.py.")
sys.exit(1)
class PhysicalDevice():
# pylint: disable=too-many-instance-attributes, too-many-arguments, too-many-positional-arguments
"""
Represents Network device.
INPUT: (Netbox device class, ZabbixAPI class, journal flag, NB journal class)
"""
def __init__(self, nb, zabbix, nb_journal_class, nb_version, journal=None, logger=None):
self.nb = nb
self.id = nb.id
self.name = nb.name
self.visible_name = None
self.status = nb.status.label
self.zabbix = zabbix
self.zabbix_id = None
self.group_id = None
self.nb_api_version = nb_version
self.zbx_template_names = []
self.zbx_templates = []
self.hostgroup = None
self.tenant = nb.tenant
self.config_context = nb.config_context
self.zbxproxy = None
self.zabbix_state = 0
self.journal = journal
self.nb_journals = nb_journal_class
self.inventory_mode = -1
self.inventory = {}
self.logger = logger if logger else getLogger(__name__)
self._setBasics()
def __repr__(self):
return self.name
def __str__(self):
return self.__repr__()
def _setBasics(self):
"""
Sets basic information like IP address.
"""
# Return error if device does not have primary IP.
if self.nb.primary_ip:
self.cidr = self.nb.primary_ip.address
self.ip = self.cidr.split("/")[0]
else:
e = f"Host {self.name}: no primary IP."
self.logger.info(e)
raise SyncInventoryError(e)
# Check if device has custom field for ZBX ID
if device_cf in self.nb.custom_fields:
self.zabbix_id = self.nb.custom_fields[device_cf]
else:
e = f"Host {self.name}: Custom field {device_cf} not present"
self.logger.warning(e)
raise SyncInventoryError(e)
# Validate hostname format.
odd_character_list = ["ä", "ö", "ü", "Ä", "Ö", "Ü", "ß"]
self.use_visible_name = False
if (any(letter in self.name for letter in odd_character_list) or
bool(search('[\u0400-\u04FF]', self.name))):
self.name = f"NETBOX_ID{self.id}"
self.visible_name = self.nb.name
self.use_visible_name = True
self.logger.info(f"Host {self.visible_name} contains special characters. "
f"Using {self.name} as name for the Netbox object "
f"and using {self.visible_name} as visible name in Zabbix.")
else:
pass
def set_hostgroup(self, hg_format, nb_site_groups, nb_regions):
"""Set the hostgroup for this device"""
# Create new Hostgroup instance
hg = Hostgroup("dev", self.nb, self.nb_api_version)
# Set Hostgroup nesting options
hg.set_nesting(traverse_site_groups, traverse_regions, nb_site_groups, nb_regions)
# Generate hostgroup based on hostgroup format
self.hostgroup = hg.generate(hg_format)
def set_template(self, prefer_config_context, overrule_custom):
""" Set Template """
self.zbx_template_names = None
# Gather templates ONLY from the device specific context
if prefer_config_context:
try:
self.zbx_template_names = self.get_templates_context()
except TemplateError as e:
self.logger.warning(e)
return True
# Gather templates from the custom field but overrule
# them should there be any device specific templates
if overrule_custom:
try:
self.zbx_template_names = self.get_templates_context()
except TemplateError:
pass
if not self.zbx_template_names:
self.zbx_template_names = self.get_templates_cf()
return True
# Gather templates ONLY from the custom field
self.zbx_template_names = self.get_templates_cf()
return True
def get_templates_cf(self):
""" Get template from custom field """
# Get Zabbix templates from the device type
device_type_cfs = self.nb.device_type.custom_fields
# Check if the ZBX Template CF is present
if template_cf in device_type_cfs:
# Set value to template
return [device_type_cfs[template_cf]]
# Custom field not found, return error
e = (f"Custom field {template_cf} not "
f"found for {self.nb.device_type.manufacturer.name}"
f" - {self.nb.device_type.display}.")
raise TemplateError(e)
def get_templates_context(self):
""" Get Zabbix templates from the device context """
if "zabbix" not in self.config_context:
e = (f"Host {self.name}: Key 'zabbix' not found in config "
"context for template lookup")
raise TemplateError(e)
if "templates" not in self.config_context["zabbix"]:
e = (f"Host {self.name}: Key 'templates' not found in config "
"context 'zabbix' for template lookup")
raise TemplateError(e)
return self.config_context["zabbix"]["templates"]
def set_inventory(self, nbdevice):
""" Set host inventory """
# Set inventory mode. Default is disabled (see class init function).
if inventory_mode == "disabled":
if inventory_sync:
self.logger.error(f"Host {self.name}: Unable to map Netbox inventory to Zabbix. "
"Inventory sync is enabled in config but inventory mode is disabled.")
return True
if inventory_mode == "manual":
self.inventory_mode = 0
elif inventory_mode == "automatic":
self.inventory_mode = 1
else:
self.logger.error(f"Host {self.name}: Specified value for inventory mode in"
f" config is not valid. Got value {inventory_mode}")
return False
self.inventory = {}
if inventory_sync and self.inventory_mode in [0,1]:
self.logger.debug(f"Host {self.name}: Starting inventory mapper")
# Let's build an inventory dict for each property in the inventory_map
for nb_inv_field, zbx_inv_field in inventory_map.items():
field_list = nb_inv_field.split("/") # convert str to list based on delimiter
# start at the base of the dict...
value = nbdevice
# ... and step through the dict till we find the needed value
for item in field_list:
value = value[item] if value else None
# Check if the result is usable and expected
# We want to apply any int or float 0 values,
# even if python thinks those are empty.
if ((value and isinstance(value, int | float | str )) or
(isinstance(value, int | float) and int(value) ==0)):
self.inventory[zbx_inv_field] = str(value)
elif not value:
# empty value should just be an empty string for API compatibility
self.logger.debug(f"Host {self.name}: Netbox inventory lookup for "
f"'{nb_inv_field}' returned an empty value")
self.inventory[zbx_inv_field] = ""
else:
# Value is not a string or numeral, probably not what the user expected.
self.logger.error(f"Host {self.name}: Inventory lookup for '{nb_inv_field}'"
" returned an unexpected type: it will be skipped.")
self.logger.debug(f"Host {self.name}: Inventory mapping complete. "
f"Mapped {len(list(filter(None, self.inventory.values())))} field(s)")
return True
def isCluster(self):
"""
Checks if device is part of cluster.
"""
return bool(self.nb.virtual_chassis)
def getClusterMaster(self):
"""
Returns chassis master ID.
"""
if not self.isCluster():
e = (f"Unable to process {self.name} for cluster calculation: "
f"not part of a cluster.")
self.logger.warning(e)
raise SyncInventoryError(e)
if not self.nb.virtual_chassis.master:
e = (f"{self.name} is part of a Netbox virtual chassis which does "
"not have a master configured. Skipping for this reason.")
self.logger.error(e)
raise SyncInventoryError(e)
return self.nb.virtual_chassis.master.id
def promoteMasterDevice(self):
"""
If device is Primary in cluster,
promote device name to the cluster name.
Returns True if successful, returns False if device is secondary.
"""
masterid = self.getClusterMaster()
if masterid == self.id:
self.logger.debug(f"Host {self.name} is primary cluster member. "
f"Modifying hostname from {self.name} to " +
f"{self.nb.virtual_chassis.name}.")
self.name = self.nb.virtual_chassis.name
return True
self.logger.debug(f"Host {self.name} is non-primary cluster member.")
return False
def zbxTemplatePrepper(self, templates):
"""
Returns Zabbix template IDs
INPUT: list of templates from Zabbix
OUTPUT: True
"""
# Check if there are templates defined
if not self.zbx_template_names:
e = f"Host {self.name}: No templates found"
self.logger.info(e)
raise SyncInventoryError(e)
# Set variable to empty list
self.zbx_templates = []
# Go through all templates defined in Netbox
for nb_template in self.zbx_template_names:
template_match = False
# Go through all templates found in Zabbix
for zbx_template in templates:
# If the template names match
if zbx_template['name'] == nb_template:
# Set match variable to true, add template details
# to class variable and return debug log
template_match = True
self.zbx_templates.append({"templateid": zbx_template['templateid'],
"name": zbx_template['name']})
e = f"Host {self.name}: found template {zbx_template['name']}"
self.logger.debug(e)
# Return error should the template not be found in Zabbix
if not template_match:
e = (f"Unable to find template {nb_template} "
f"for host {self.name} in Zabbix. Skipping host...")
self.logger.warning(e)
raise SyncInventoryError(e)
def setZabbixGroupID(self, groups):
"""
Sets Zabbix group ID as instance variable
INPUT: list of hostgroups
OUTPUT: True / False
"""
# Go through all groups
for group in groups:
if group['name'] == self.hostgroup:
self.group_id = group['groupid']
e = f"Host {self.name}: matched group {group['name']}"
self.logger.debug(e)
return True
return False
def cleanup(self):
"""
Removes device from external resources.
Resets custom fields in Netbox.
"""
if self.zabbix_id:
try:
# Check if the Zabbix host exists in Zabbix
zbx_host = bool(self.zabbix.host.get(filter={'hostid': self.zabbix_id},
output=[]))
e = (f"Host {self.name}: was already deleted from Zabbix."
" Removed link in Netbox.")
if zbx_host:
# Delete host should it exists
self.zabbix.host.delete(self.zabbix_id)
e = f"Host {self.name}: Deleted host from Zabbix."
self._zeroize_cf()
self.logger.info(e)
self.create_journal_entry("warning", "Deleted host from Zabbix")
except APIRequestError as e:
message = f"Zabbix returned the following error: {str(e)}."
self.logger.error(message)
raise SyncExternalError(message) from e
def _zeroize_cf(self):
"""Sets the hostID custom field in Netbox to zero,
effectively destroying the link"""
self.nb.custom_fields[device_cf] = None
self.nb.save()
def _zabbixHostnameExists(self):
"""
Checks if hostname exists in Zabbix.
"""
# Validate the hostname or visible name field
if not self.use_visible_name:
zbx_filter = {'host': self.name}
else:
zbx_filter = {'name': self.visible_name}
host = self.zabbix.host.get(filter=zbx_filter, output=[])
return bool(host)
def setInterfaceDetails(self):
"""
Checks interface parameters from Netbox and
creates a model for the interface to be used in Zabbix.
"""
try:
# Initiate interface class
interface = ZabbixInterface(self.nb.config_context, self.ip)
# Check if Netbox has device context.
# If not fall back to old config.
if interface.get_context():
# If device is SNMP type, add additional information.
if interface.interface["type"] == 2:
interface.set_snmp()
else:
interface.set_default_snmp()
return [interface.interface]
except InterfaceConfigError as e:
message = f"{self.name}: {e}"
self.logger.warning(message)
raise SyncInventoryError(message) from e
def setProxy(self, proxy_list):
"""
Sets proxy or proxy group if this
value has been defined in config context
input: List of all proxies and proxy groups in standardized format
"""
# check if the key Zabbix is defined in the config context
if "zabbix" not in self.nb.config_context:
return False
if ("proxy" in self.nb.config_context["zabbix"] and
not self.nb.config_context["zabbix"]["proxy"]):
return False
# Proxy group takes priority over a proxy due
# to it being HA and therefore being more reliable
# Includes proxy group fix since Zabbix <= 6 should ignore this
proxy_types = ["proxy"]
if str(self.zabbix.version).startswith('7'):
# Only insert groups in front of list for Zabbix7
proxy_types.insert(0, "proxy_group")
for proxy_type in proxy_types:
# Check if the key exists in Netbox CC
if proxy_type in self.nb.config_context["zabbix"]:
proxy_name = self.nb.config_context["zabbix"][proxy_type]
# go through all proxies
for proxy in proxy_list:
# If the proxy does not match the type, ignore and continue
if proxy["type"] != proxy_type:
continue
# If the proxy name matches
if proxy["name"] == proxy_name:
self.logger.debug(f"Host {self.name}: using {proxy['type']}"
f" {proxy_name}")
self.zbxproxy = proxy
return True
self.logger.warning(f"Host {self.name}: unable to find proxy {proxy_name}")
return False
def createInZabbix(self, groups, templates, proxies,
description="Host added by Netbox sync script."):
"""
Creates Zabbix host object with parameters from Netbox object.
"""
# Check if hostname is already present in Zabbix
if not self._zabbixHostnameExists():
# Set group and template ID's for host
if not self.setZabbixGroupID(groups):
e = (f"Unable to find group '{self.hostgroup}' "
f"for host {self.name} in Zabbix.")
self.logger.warning(e)
raise SyncInventoryError(e)
self.zbxTemplatePrepper(templates)
templateids = []
for template in self.zbx_templates:
templateids.append({'templateid': template['templateid']})
# Set interface, group and template configuration
interfaces = self.setInterfaceDetails()
groups = [{"groupid": self.group_id}]
# Set Zabbix proxy if defined
self.setProxy(proxies)
# Set basic data for host creation
create_data = {"host": self.name,
"name": self.visible_name,
"status": self.zabbix_state,
"interfaces": interfaces,
"groups": groups,
"templates": templateids,
"description": description,
"inventory_mode": self.inventory_mode,
"inventory": self.inventory
}
# If a Zabbix proxy or Zabbix Proxy group has been defined
if self.zbxproxy:
# If a lower version than 7 is used, we can assume that
# the proxy is a normal proxy and not a proxy group
if not str(self.zabbix.version).startswith('7'):
create_data["proxy_hostid"] = self.zbxproxy["id"]
else:
# Configure either a proxy or proxy group
create_data[self.zbxproxy["idtype"]] = self.zbxproxy["id"]
create_data["monitored_by"] = self.zbxproxy["monitored_by"]
# Add host to Zabbix
try:
host = self.zabbix.host.create(**create_data)
self.zabbix_id = host["hostids"][0]
except APIRequestError as e:
e = f"Host {self.name}: Couldn't create. Zabbix returned {str(e)}."
self.logger.error(e)
raise SyncExternalError(e) from None
# Set Netbox custom field to hostID value.
self.nb.custom_fields[device_cf] = int(self.zabbix_id)
self.nb.save()
msg = f"Host {self.name}: Created host in Zabbix."
self.logger.info(msg)
self.create_journal_entry("success", msg)
else:
e = f"Host {self.name}: Unable to add to Zabbix. Host already present."
self.logger.warning(e)
def createZabbixHostgroup(self, hostgroups):
"""
Creates Zabbix host group based on hostgroup format.
Creates multiple when using a nested format.
"""
final_data = []
# Check if the hostgroup is in a nested format and check each parent
for pos in range(len(self.hostgroup.split('/'))):
zabbix_hg = self.hostgroup.rsplit('/', pos)[0]
if self.lookupZabbixHostgroup(hostgroups, zabbix_hg):
# Hostgroup already exists
continue
# Create new group
try:
# API call to Zabbix
groupid = self.zabbix.hostgroup.create(name=zabbix_hg)
e = f"Hostgroup '{zabbix_hg}': created in Zabbix."
self.logger.info(e)
# Add group to final data
final_data.append({'groupid': groupid["groupids"][0], 'name': zabbix_hg})
except APIRequestError as e:
msg = f"Hostgroup '{zabbix_hg}': unable to create. Zabbix returned {str(e)}."
self.logger.error(msg)
raise SyncExternalError(msg) from e
return final_data
def lookupZabbixHostgroup(self, group_list, lookup_group):
"""
Function to check if a hostgroup
exists in a list of Zabbix hostgroups
INPUT: Group list and group lookup
OUTPUT: Boolean
"""
for group in group_list:
if group["name"] == lookup_group:
return True
return False
def updateZabbixHost(self, **kwargs):
"""
Updates Zabbix host with given parameters.
INPUT: Key word arguments for Zabbix host object.
"""
try:
self.zabbix.host.update(hostid=self.zabbix_id, **kwargs)
except APIRequestError as e:
e = (f"Host {self.name}: Unable to update. "
f"Zabbix returned the following error: {str(e)}.")
self.logger.error(e)
raise SyncExternalError(e) from None
self.logger.info(f"Updated host {self.name} with data {kwargs}.")
self.create_journal_entry("info", "Updated host in Zabbix with latest NB data.")
def ConsistencyCheck(self, groups, templates, proxies, proxy_power, create_hostgroups):
# pylint: disable=too-many-branches, too-many-statements
"""
Checks if Zabbix object is still valid with Netbox parameters.
"""
# If group is found or if the hostgroup is nested
if not self.setZabbixGroupID(groups) or len(self.hostgroup.split('/')) > 1:
if create_hostgroups:
# Script is allowed to create a new hostgroup
new_groups = self.createZabbixHostgroup(groups)
for group in new_groups:
# Add all new groups to the list of groups
groups.append(group)
# check if the initial group was not already found (and this is a nested folder check)
if not self.group_id:
# Function returns true / false but also sets GroupID
if not self.setZabbixGroupID(groups) and not create_hostgroups:
e = (f"Host {self.name}: different hostgroup is required but "
"unable to create hostgroup without generation permission.")
self.logger.warning(e)
raise SyncInventoryError(e)
# Prepare templates and proxy config
self.zbxTemplatePrepper(templates)
self.setProxy(proxies)
# Get host object from Zabbix
host = self.zabbix.host.get(filter={'hostid': self.zabbix_id},
selectInterfaces=['type', 'ip',
'port', 'details',
'interfaceid'],
selectGroups=["groupid"],
selectParentTemplates=["templateid"],
selectInventory=list(inventory_map.values()))
if len(host) > 1:
e = (f"Got {len(host)} results for Zabbix hosts "
f"with ID {self.zabbix_id} - hostname {self.name}.")
self.logger.error(e)
raise SyncInventoryError(e)
if len(host) == 0:
e = (f"Host {self.name}: No Zabbix host found. "
f"This is likely the result of a deleted Zabbix host "
f"without zeroing the ID field in Netbox.")
self.logger.error(e)
raise SyncInventoryError(e)
host = host[0]
if host["host"] == self.name:
self.logger.debug(f"Host {self.name}: hostname in-sync.")
else:
self.logger.warning(f"Host {self.name}: hostname OUT of sync. "
f"Received value: {host['host']}")
self.updateZabbixHost(host=self.name)
# Execute check depending on whether the name is special or not
if self.use_visible_name:
if host["name"] == self.visible_name:
self.logger.debug(f"Host {self.name}: visible name in-sync.")
else:
self.logger.warning(f"Host {self.name}: visible name OUT of sync."
f" Received value: {host['name']}")
self.updateZabbixHost(name=self.visible_name)
# Check if the templates are in-sync
if not self.zbx_template_comparer(host["parentTemplates"]):
self.logger.warning(f"Host {self.name}: template(s) OUT of sync.")
# Prepare Templates for API parsing
templateids = []
for template in self.zbx_templates:
templateids.append({'templateid': template['templateid']})
# Update Zabbix with NB templates and clear any old / lost templates
self.updateZabbixHost(templates_clear=host["parentTemplates"],
templates=templateids)
else:
self.logger.debug(f"Host {self.name}: template(s) in-sync.")
for group in host["groups"]:
if group["groupid"] == self.group_id:
self.logger.debug(f"Host {self.name}: hostgroup in-sync.")
break
else:
self.logger.warning(f"Host {self.name}: hostgroup OUT of sync.")
self.updateZabbixHost(groups={'groupid': self.group_id})
if int(host["status"]) == self.zabbix_state:
self.logger.debug(f"Host {self.name}: status in-sync.")
else:
self.logger.warning(f"Host {self.name}: status OUT of sync.")
self.updateZabbixHost(status=str(self.zabbix_state))
# Check if a proxy has been defined
if self.zbxproxy:
# Check if proxy or proxy group is defined
if (self.zbxproxy["idtype"] in host and
host[self.zbxproxy["idtype"]] == self.zbxproxy["id"]):
self.logger.debug(f"Host {self.name}: proxy in-sync.")
# Backwards compatibility for Zabbix <= 6
elif "proxy_hostid" in host and host["proxy_hostid"] == self.zbxproxy["id"]:
self.logger.debug(f"Host {self.name}: proxy in-sync.")
# Proxy does not match, update Zabbix
else:
self.logger.warning(f"Host {self.name}: proxy OUT of sync.")
# Zabbix <= 6 patch
if not str(self.zabbix.version).startswith('7'):
self.updateZabbixHost(proxy_hostid=self.zbxproxy['id'])
# Zabbix 7+
else:
# Prepare data structure for updating either proxy or group
update_data = {self.zbxproxy["idtype"]: self.zbxproxy["id"],
"monitored_by": self.zbxproxy['monitored_by']}
self.updateZabbixHost(**update_data)
else:
# No proxy is defined in Netbox
proxy_set = False
# Check if a proxy is defined. Uses the proxy_hostid key for backwards compatibility
for key in ("proxy_hostid", "proxyid", "proxy_groupid"):
if key in host:
if bool(int(host[key])):
proxy_set = True
if proxy_power and proxy_set:
# Zabbix <= 6 fix
self.logger.warning(f"Host {self.name}: no proxy is configured in Netbox "
"but is configured in Zabbix. Removing proxy config in Zabbix")
if "proxy_hostid" in host and bool(host["proxy_hostid"]):
self.updateZabbixHost(proxy_hostid=0)
# Zabbix 7 proxy
elif "proxyid" in host and bool(host["proxyid"]):
self.updateZabbixHost(proxyid=0, monitored_by=0)
# Zabbix 7 proxy group
elif "proxy_groupid" in host and bool(host["proxy_groupid"]):
self.updateZabbixHost(proxy_groupid=0, monitored_by=0)
# Checks if a proxy has been defined in Zabbix and if proxy_power config has been set
if proxy_set and not proxy_power:
# Display error message
self.logger.error(f"Host {self.name} is configured "
f"with a proxy in Zabbix but not in Netbox. The"
" -p flag was omitted: no "
"changes have been made.")
if not proxy_set:
self.logger.debug(f"Host {self.name}: proxy in-sync.")
# Check host inventory mode
if str(host['inventory_mode']) == str(self.inventory_mode):
self.logger.debug(f"Host {self.name}: inventory_mode in-sync.")
else:
self.logger.warning(f"Host {self.name}: inventory_mode OUT of sync.")
self.updateZabbixHost(inventory_mode=str(self.inventory_mode))
if inventory_sync and self.inventory_mode in [0,1]:
# Check host inventory mapping
if host['inventory'] == self.inventory:
self.logger.debug(f"Host {self.name}: inventory in-sync.")
else:
self.logger.warning(f"Host {self.name}: inventory OUT of sync.")
self.updateZabbixHost(inventory=self.inventory)
# If only 1 interface has been found
# pylint: disable=too-many-nested-blocks
if len(host['interfaces']) == 1:
updates = {}
# Go through each key / item and check if it matches Zabbix
for key, item in self.setInterfaceDetails()[0].items():
# Check if Netbox value is found in Zabbix
if key in host["interfaces"][0]:
# If SNMP is used, go through nested dict
# to compare SNMP parameters
if isinstance(item,dict) and key == "details":
for k, i in item.items():
if k in host["interfaces"][0][key]:
# Set update if values don't match
if host["interfaces"][0][key][k] != str(i):
# If dict has not been created, add it
if key not in updates:
updates[key] = {}
updates[key][k] = str(i)
# If SNMP version has been changed
# break loop and force full SNMP update
if k == "version":
break
# Force full SNMP config update
# when version has changed.
if key in updates:
if "version" in updates[key]:
for k, i in item.items():
updates[key][k] = str(i)
continue
# Set update if values don't match
if host["interfaces"][0][key] != str(item):
updates[key] = item
if updates:
# If interface updates have been found: push to Zabbix
self.logger.warning(f"Host {self.name}: Interface OUT of sync.")
if "type" in updates:
# Changing interface type not supported. Raise exception.
e = (f"Host {self.name}: changing interface type to "
f"{str(updates['type'])} is not supported.")
self.logger.error(e)
raise InterfaceConfigError(e)
# Set interfaceID for Zabbix config
updates["interfaceid"] = host["interfaces"][0]['interfaceid']
try:
# API call to Zabbix
self.zabbix.hostinterface.update(updates)
e = f"Host {self.name}: solved interface conflict."
self.logger.info(e)
self.create_journal_entry("info", e)
except APIRequestError as e:
msg = f"Zabbix returned the following error: {str(e)}."
self.logger.error(msg)
raise SyncExternalError(msg) from e
else:
# If no updates are found, Zabbix interface is in-sync
e = f"Host {self.name}: interface in-sync."
self.logger.debug(e)
else:
e = (f"Host {self.name} has an unsupported interface configuration."
f" Host has a total of {len(host['interfaces'])} interfaces. "
"Manual intervention required.")
self.logger.error(e)
raise SyncInventoryError(e)
def create_journal_entry(self, severity, message):
"""
Send a new Journal entry to Netbox. Useful for viewing actions
in Netbox without having to look in Zabbix or the script log output.
"""
if self.journal:
# Check if the severity is valid
if severity not in ["info", "success", "warning", "danger"]:
self.logger.warning(f"Value {severity} not valid for NB journal entries.")
return False
journal = {"assigned_object_type": "dcim.device",
"assigned_object_id": self.id,
"kind": severity,
"comments": message
}
try:
self.nb_journals.create(journal)
self.logger.debug(f"Host {self.name}: Created journal entry in Netbox")
return True
except JournalError as e:
self.logger.warning("Unable to create journal entry for "
f"{self.name}: NB returned {e}")
return False
return False
def zbx_template_comparer(self, tmpls_from_zabbix):
"""
Compares the Netbox and Zabbix templates with each other.
Returns False if there is any mismatch.
INPUT: list of NB and ZBX templates
OUTPUT: Boolean True/False
"""
successful_templates = []
# Go through each Netbox template
for nb_tmpl in self.zbx_templates:
# Go through each Zabbix template
for pos, zbx_tmpl in enumerate(tmpls_from_zabbix):
# Check if template IDs match
if nb_tmpl["templateid"] == zbx_tmpl["templateid"]:
# Templates match. Remove this template from the Zabbix templates
# and add this NB template to the list of successful templates
tmpls_from_zabbix.pop(pos)
successful_templates.append(nb_tmpl)
self.logger.debug(f"Host {self.name}: template "
f"{nb_tmpl['name']} is present in Zabbix.")
break
if len(successful_templates) == len(self.zbx_templates) and len(tmpls_from_zabbix) == 0:
# All of the Netbox templates have been confirmed as successful
# and the ZBX template list is empty. This means that
# all of the templates match.
return True
return False
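The matching logic above can be sketched as a standalone helper (sample template records below are hypothetical; the helper mirrors zbx_template_comparer but copies the Zabbix list instead of consuming it):

```python
def templates_in_sync(nb_templates, zbx_templates):
    """True when both lists contain exactly the same template IDs."""
    remaining = list(zbx_templates)  # copy so the caller's list is untouched
    matched = []
    for nb_tmpl in nb_templates:
        for pos, zbx_tmpl in enumerate(remaining):
            if nb_tmpl["templateid"] == zbx_tmpl["templateid"]:
                remaining.pop(pos)
                matched.append(nb_tmpl)
                break
    # In sync only when every NB template matched and no ZBX template is left over
    return len(matched) == len(nb_templates) and not remaining

# Hypothetical template records
nb = [{"templateid": "10101", "name": "Linux by Zabbix agent"}]
zbx_same = [{"templateid": "10101"}]
zbx_extra = [{"templateid": "10101"}, {"templateid": "10204"}]
```

An extra template on either side makes the comparison fail, which is what triggers the templates_clear update in ConsistencyCheck.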
@@ -1,44 +0,0 @@
"""A collection of tools used by several classes"""
def convert_recordset(recordset):
""" Converts netbox RecordSet to list of dicts. """
recordlist = []
for record in recordset:
recordlist.append(record.__dict__)
return recordlist
def build_path(endpoint, list_of_dicts):
"""
Builds a path list of related parent/child items.
This can be used to generate a joinable list to
be used in hostgroups.
"""
item_path = []
itemlist = [i for i in list_of_dicts if i['name'] == endpoint]
item = itemlist[0] if len(itemlist) == 1 else None
item_path.append(item['name'])
while item['_depth'] > 0:
itemlist = [i for i in list_of_dicts if i['name'] == str(item['parent'])]
item = itemlist[0] if len(itemlist) == 1 else None
item_path.append(item['name'])
item_path.reverse()
return item_path
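A minimal sketch of the parent-chain walk above, with hypothetical region records (`_depth` counts nesting levels, 0 being the root; `walk_parent_path` mirrors build_path's logic):

```python
def walk_parent_path(endpoint, list_of_dicts):
    """Mirror of build_path: walk from endpoint up to the root, return root-first."""
    item = next(i for i in list_of_dicts if i["name"] == endpoint)
    item_path = [item["name"]]
    while item["_depth"] > 0:
        item = next(i for i in list_of_dicts if i["name"] == str(item["parent"]))
        item_path.append(item["name"])
    item_path.reverse()
    return item_path

# Hypothetical region records as produced by convert_recordset
regions = [
    {"name": "EU", "parent": None, "_depth": 0},
    {"name": "NL", "parent": "EU", "_depth": 1},
    {"name": "Amsterdam", "parent": "NL", "_depth": 2},
]
hostgroup_part = "/".join(walk_parent_path("Amsterdam", regions))  # "EU/NL/Amsterdam"
```

The reversed, joinable list is what makes nested hostgroup names like `EU/NL/Amsterdam` possible.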
def proxy_prepper(proxy_list, proxy_group_list):
"""
Function that takes 2 lists and converts them using a
standardized format for further processing.
"""
output = []
for proxy in proxy_list:
proxy["type"] = "proxy"
proxy["id"] = proxy["proxyid"]
proxy["idtype"] = "proxyid"
proxy["monitored_by"] = 1
output.append(proxy)
for group in proxy_group_list:
group["type"] = "proxy_group"
group["id"] = group["proxy_groupid"]
group["idtype"] = "proxy_groupid"
group["monitored_by"] = 2
output.append(group)
return output
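A usage sketch of the normalisation above (sample data is hypothetical; `prep_proxies` is a condensed copy of proxy_prepper):

```python
def prep_proxies(proxy_list, proxy_group_list):
    """Normalise proxies (monitored_by=1) and proxy groups (monitored_by=2)."""
    output = []
    for proxy in proxy_list:
        proxy.update(type="proxy", id=proxy["proxyid"],
                     idtype="proxyid", monitored_by=1)
        output.append(proxy)
    for group in proxy_group_list:
        group.update(type="proxy_group", id=group["proxy_groupid"],
                     idtype="proxy_groupid", monitored_by=2)
        output.append(group)
    return output

combined = prep_proxies([{"proxyid": "4", "name": "proxy-ams1"}],
                        [{"proxy_groupid": "2", "name": "pg-eu"}])
# Callers can now build a host update uniformly:
#   {entry["idtype"]: entry["id"], "monitored_by": entry["monitored_by"]}
```

This uniform shape is what ConsistencyCheck relies on when comparing and updating a host's proxy assignment.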
@@ -1,273 +1,6 @@
#!/usr/bin/env python3
# pylint: disable=invalid-name, logging-not-lazy, too-many-locals, logging-fstring-interpolation
"""Netbox to Zabbix sync script."""
import logging
import argparse
import sys
from os import environ, path
from pynetbox import api
from pynetbox.core.query import RequestError as NBRequestError
from requests.exceptions import ConnectionError as RequestsConnectionError
from zabbix_utils import ZabbixAPI, APIRequestError, ProcessingError
from modules.device import PhysicalDevice
from modules.virtual_machine import VirtualMachine
from modules.tools import convert_recordset, proxy_prepper
from modules.exceptions import EnvironmentVarError, HostgroupError, SyncError
try:
from config import (
templates_config_context,
templates_config_context_overrule,
clustering, create_hostgroups,
create_journal, full_proxy_sync,
zabbix_device_removal,
zabbix_device_disable,
hostgroup_format,
vm_hostgroup_format,
nb_device_filter,
sync_vms,
nb_vm_filter
)
except ModuleNotFoundError:
print("Configuration file config.py not found in main directory. "
"Please create the file or rename the config.py.example file to config.py.")
sys.exit(1)
# Set logging
log_format = logging.Formatter('%(asctime)s - %(name)s - '
'%(levelname)s - %(message)s')
lgout = logging.StreamHandler()
lgout.setFormatter(log_format)
lgout.setLevel(logging.DEBUG)
lgfile = logging.FileHandler(path.join(path.dirname(
path.realpath(__file__)), "sync.log"))
lgfile.setFormatter(log_format)
lgfile.setLevel(logging.DEBUG)
logger = logging.getLogger("Netbox-Zabbix-sync")
logger.addHandler(lgout)
logger.addHandler(lgfile)
logger.setLevel(logging.WARNING)
def main(arguments):
"""Run the sync process."""
# pylint: disable=too-many-branches, too-many-statements
# set environment variables
if arguments.verbose:
logger.setLevel(logging.DEBUG)
env_vars = ["ZABBIX_HOST", "NETBOX_HOST", "NETBOX_TOKEN"]
if "ZABBIX_TOKEN" in environ:
env_vars.append("ZABBIX_TOKEN")
else:
env_vars.append("ZABBIX_USER")
env_vars.append("ZABBIX_PASS")
for var in env_vars:
if var not in environ:
e = f"Environment variable {var} has not been defined."
logger.error(e)
raise EnvironmentVarError(e)
# Read the relevant environment variables
if "ZABBIX_TOKEN" in env_vars:
zabbix_user = None
zabbix_pass = None
zabbix_token = environ.get("ZABBIX_TOKEN")
else:
zabbix_user = environ.get("ZABBIX_USER")
zabbix_pass = environ.get("ZABBIX_PASS")
zabbix_token = None
zabbix_host = environ.get("ZABBIX_HOST")
netbox_host = environ.get("NETBOX_HOST")
netbox_token = environ.get("NETBOX_TOKEN")
# Set Netbox API
netbox = api(netbox_host, token=netbox_token, threading=True)
# Check if the provided Hostgroup layout is valid
hg_objects = hostgroup_format.split("/")
allowed_objects = ["location", "role", "manufacturer", "region",
"site", "site_group", "tenant", "tenant_group"]
# Create API call to get all custom fields which are on the device objects
try:
device_cfs = list(netbox.extras.custom_fields.filter(type="text", content_type_id=23))
except RequestsConnectionError:
logger.error(f"Unable to connect to Netbox with URL {netbox_host}."
" Please check the URL and status of Netbox.")
sys.exit(1)
except NBRequestError as e:
logger.error(f"Netbox error: {e}")
sys.exit(1)
for cf in device_cfs:
allowed_objects.append(cf.name)
for hg_object in hg_objects:
if hg_object not in allowed_objects:
e = (f"Hostgroup item {hg_object} is not valid. Make sure you"
" use valid items and separate them with '/'.")
logger.error(e)
raise HostgroupError(e)
# Set Zabbix API
try:
if not zabbix_token:
zabbix = ZabbixAPI(zabbix_host, user=zabbix_user, password=zabbix_pass)
else:
zabbix = ZabbixAPI(zabbix_host, token=zabbix_token)
zabbix.check_auth()
except (APIRequestError, ProcessingError) as e:
e = f"Zabbix returned the following error: {str(e)}"
logger.error(e)
sys.exit(1)
# Set API parameter mapping based on API version
if not str(zabbix.version).startswith('7'):
proxy_name = "host"
else:
proxy_name = "name"
# Get all Zabbix and Netbox data
netbox_devices = list(netbox.dcim.devices.filter(**nb_device_filter))
netbox_vms = []
if sync_vms:
netbox_vms = list(netbox.virtualization.virtual_machines.filter(**nb_vm_filter))
netbox_site_groups = convert_recordset((netbox.dcim.site_groups.all()))
netbox_regions = convert_recordset(netbox.dcim.regions.all())
netbox_journals = netbox.extras.journal_entries
zabbix_groups = zabbix.hostgroup.get(output=['groupid', 'name'])
zabbix_templates = zabbix.template.get(output=['templateid', 'name'])
zabbix_proxies = zabbix.proxy.get(output=['proxyid', proxy_name])
# Set empty list for proxy processing Zabbix <= 6
zabbix_proxygroups = []
if str(zabbix.version).startswith('7'):
zabbix_proxygroups = zabbix.proxygroup.get(output=["proxy_groupid", "name"])
# Sanitize proxy data
if proxy_name == "host":
for proxy in zabbix_proxies:
proxy['name'] = proxy.pop('host')
# Prepare list of all proxy and proxy_groups
zabbix_proxy_list = proxy_prepper(zabbix_proxies, zabbix_proxygroups)
# Get Netbox API version
nb_version = netbox.version
# Go through all Netbox devices
for nb_vm in netbox_vms:
try:
vm = VirtualMachine(nb_vm, zabbix, netbox_journals, nb_version,
create_journal, logger)
logger.debug(f"Host {vm.name}: started operations on VM.")
vm.set_vm_template()
# Check if a valid template has been found for this VM.
if not vm.zbx_template_names:
continue
vm.set_hostgroup(vm_hostgroup_format, netbox_site_groups, netbox_regions)
# Check if a valid hostgroup has been found for this VM.
if not vm.hostgroup:
continue
# Temporarily disable inventory sync for VMs
# vm.set_inventory(nb_vm)
# Checks if the VM is in a cleanup state
if vm.status in zabbix_device_removal:
if vm.zabbix_id:
# Delete device from Zabbix
# and remove hostID from Netbox.
vm.cleanup()
logger.info(f"VM {vm.name}: cleanup complete")
continue
# VM has been added to Netbox
# but is not in the active state
logger.info(f"VM {vm.name}: skipping since this VM is "
f"not in the active state.")
continue
# Check if the VM is in the disabled state
if vm.status in zabbix_device_disable:
vm.zabbix_state = 1
# Check if VM is already in Zabbix
if vm.zabbix_id:
vm.ConsistencyCheck(zabbix_groups, zabbix_templates,
zabbix_proxy_list, full_proxy_sync,
create_hostgroups)
continue
# Add hostgroup if config is set
if create_hostgroups:
# Create new hostgroup. Potentially multiple groups if nested
hostgroups = vm.createZabbixHostgroup(zabbix_groups)
# go through all newly created hostgroups
for group in hostgroups:
# Add new hostgroups to zabbix group list
zabbix_groups.append(group)
# Add VM to Zabbix
vm.createInZabbix(zabbix_groups, zabbix_templates,
zabbix_proxy_list)
except SyncError:
pass
for nb_device in netbox_devices:
try:
# Set device instance data such as hostgroup and template information.
device = PhysicalDevice(nb_device, zabbix, netbox_journals, nb_version,
create_journal, logger)
logger.debug(f"Host {device.name}: started operations on device.")
device.set_template(templates_config_context, templates_config_context_overrule)
# Check if a valid template has been found for this device.
if not device.zbx_template_names:
continue
device.set_hostgroup(hostgroup_format, netbox_site_groups, netbox_regions)
# Check if a valid hostgroup has been found for this device.
if not device.hostgroup:
continue
device.set_inventory(nb_device)
# Checks if device is part of cluster.
# Requires clustering variable
if device.isCluster() and clustering:
# Check if device is primary or secondary
if device.promoteMasterDevice():
e = (f"Device {device.name}: is "
f"part of cluster and primary.")
logger.info(e)
else:
# Device is secondary in cluster.
# Don't continue with this device.
e = (f"Device {device.name}: is part of cluster "
f"but not primary. Skipping this host...")
logger.info(e)
continue
# Checks if device is in cleanup state
if device.status in zabbix_device_removal:
if device.zabbix_id:
# Delete device from Zabbix
# and remove hostID from Netbox.
device.cleanup()
logger.info(f"Device {device.name}: cleanup complete")
continue
# Device has been added to Netbox
# but is not in the active state
logger.info(f"Device {device.name}: skipping since this device is "
f"not in the active state.")
continue
# Check if the device is in the disabled state
if device.status in zabbix_device_disable:
device.zabbix_state = 1
# Check if device is already in Zabbix
if device.zabbix_id:
device.ConsistencyCheck(zabbix_groups, zabbix_templates,
zabbix_proxy_list, full_proxy_sync,
create_hostgroups)
continue
# Add hostgroup if config is set
if create_hostgroups:
# Create new hostgroup. Potentially multiple groups if nested
hostgroups = device.createZabbixHostgroup(zabbix_groups)
# go through all newly created hostgroups
for group in hostgroups:
# Add new hostgroups to zabbix group list
zabbix_groups.append(group)
# Add device to Zabbix
device.createInZabbix(zabbix_groups, zabbix_templates,
zabbix_proxy_list)
except SyncError:
pass
if __name__ == "__main__":
parser = argparse.ArgumentParser(
description='A script to sync Zabbix with Netbox device data.'
)
parser.add_argument("-v", "--verbose", help="Turn on debugging.",
action="store_true")
args = parser.parse_args()
main(args)
#!/usr/bin/env python3
from netbox_zabbix_sync.modules.cli import parse_cli
if __name__ == "__main__":
parse_cli()
@@ -0,0 +1,5 @@
"""
Makes the core module's Sync class available at package level for easier imports.
"""
from netbox_zabbix_sync.modules.core import Sync as Sync
@@ -0,0 +1,205 @@
import argparse
import logging
from os import environ
from netbox_zabbix_sync.modules.core import Sync
from netbox_zabbix_sync.modules.exceptions import EnvironmentVarError
from netbox_zabbix_sync.modules.logging import get_logger, set_log_levels, setup_logger
from netbox_zabbix_sync.modules.settings import load_config
# Boolean settings that can be toggled via --flag / --no-flag
_BOOL_ARGS = [
("clustering", "Enable clustering of devices with virtual chassis setup."),
("create_hostgroups", "Enable hostgroup generation (requires Zabbix permissions)."),
("create_journal", "Create NetBox journal entries on changes."),
("sync_vms", "Enable virtual machine sync."),
(
"full_proxy_sync",
"Enable full proxy sync (removes proxies not in config context).",
),
(
"templates_config_context",
"Use config context as the template source instead of a custom field.",
),
(
"templates_config_context_overrule",
"Give config context templates higher priority than custom field templates.",
),
("traverse_regions", "Use the full parent-region path in hostgroup names."),
("traverse_site_groups", "Use the full parent-site-group path in hostgroup names."),
(
"extended_site_properties",
"Fetch additional site info from NetBox (increases API queries).",
),
("inventory_sync", "Sync NetBox device properties to Zabbix inventory."),
("usermacro_sync", "Sync usermacros from NetBox to Zabbix."),
("tag_sync", "Sync host tags to Zabbix."),
("tag_lower", "Lowercase tag names and values before syncing."),
]
# String settings that can be set via --option VALUE
_STR_ARGS = [
("template_cf", "NetBox custom field name for the Zabbix template.", "FIELD"),
("device_cf", "NetBox custom field name for the Zabbix host ID.", "FIELD"),
(
"hostgroup_format",
"Hostgroup path pattern for physical devices (e.g. site/manufacturer/role).",
"PATTERN",
),
(
"vm_hostgroup_format",
"Hostgroup path pattern for virtual machines (e.g. cluster_type/cluster/role).",
"PATTERN",
),
(
"inventory_mode",
"Zabbix inventory mode: disabled, manual, or automatic.",
"MODE",
),
("tag_name", "Zabbix tag name used when syncing NetBox tags.", "NAME"),
(
"tag_value",
"NetBox tag property to use as the Zabbix tag value (name, slug, or display).",
"PROPERTY",
),
]
def _apply_cli_overrides(config: dict, arguments: argparse.Namespace) -> dict:
"""Override loaded config with any values explicitly provided on the CLI."""
for key, _help in _BOOL_ARGS:
cli_val = getattr(arguments, key, None)
if cli_val is not None:
config[key] = cli_val
for key, _help, _meta in _STR_ARGS:
cli_val = getattr(arguments, key, None)
if cli_val is not None:
config[key] = cli_val
return config
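The override rule above (None means "not given on the CLI", so the config.py value wins) can be sketched standalone; the namespace and keys below are hypothetical:

```python
import argparse

def apply_cli_overrides(config, arguments, keys):
    """Copy only the CLI values the user actually provided (None means 'not given')."""
    for key in keys:
        cli_val = getattr(arguments, key, None)
        if cli_val is not None:
            config[key] = cli_val
    return config

# Hypothetical namespace: --sync-vms was passed, --create-journal was not
ns = argparse.Namespace(sync_vms=True, create_journal=None)
cfg = apply_cli_overrides({"sync_vms": False, "create_journal": False},
                          ns, ["sync_vms", "create_journal"])
# sync_vms flips to True; create_journal keeps its config.py value
```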
def main(arguments):
"""Run the sync process."""
# Set logging
setup_logger()
logger = get_logger()
# Set log levels based on verbosity flags
if arguments.verbose:
set_log_levels(logging.WARNING, logging.INFO)
if arguments.debug:
set_log_levels(logging.WARNING, logging.DEBUG)
if arguments.debug_all:
set_log_levels(logging.DEBUG, logging.DEBUG)
if arguments.quiet:
set_log_levels(logging.ERROR, logging.ERROR)
# Gather environment variables for Zabbix and Netbox communication
env_vars = ["ZABBIX_HOST", "NETBOX_HOST", "NETBOX_TOKEN"]
if "ZABBIX_TOKEN" in environ:
env_vars.append("ZABBIX_TOKEN")
else:
env_vars.append("ZABBIX_USER")
env_vars.append("ZABBIX_PASS")
for var in env_vars:
if var not in environ:
e = f"Environment variable {var} has not been defined."
logger.error(e)
raise EnvironmentVarError(e)
# Read the relevant environment variables
if "ZABBIX_TOKEN" in env_vars:
zabbix_user = None
zabbix_pass = None
zabbix_token = environ.get("ZABBIX_TOKEN")
else:
zabbix_user = environ.get("ZABBIX_USER")
zabbix_pass = environ.get("ZABBIX_PASS")
zabbix_token = None
zabbix_host = environ.get("ZABBIX_HOST")
netbox_host = environ.get("NETBOX_HOST")
netbox_token = environ.get("NETBOX_TOKEN")
# Load config (defaults → config.py → env vars), then apply CLI overrides
config = load_config(config_file=arguments.config)
config = _apply_cli_overrides(config, arguments)
# Run main sync process
syncer = Sync(config=config)
syncer.connect(
nb_host=netbox_host,
nb_token=netbox_token,
zbx_host=zabbix_host,
zbx_user=zabbix_user,
zbx_pass=zabbix_pass,
zbx_token=zabbix_token,
)
syncer.start()
def parse_cli():
"""
Parse command-line arguments and run the main function.
"""
parser = argparse.ArgumentParser(
description="Synchronise NetBox device data to Zabbix."
)
# ── Verbosity ──────────────────────────────────────────────────────────────
parser.add_argument(
"-v", "--verbose", help="Turn on verbose logging.", action="store_true"
)
parser.add_argument(
"-vv", "--debug", help="Turn on debugging.", action="store_true"
)
parser.add_argument(
"-vvv",
"--debug-all",
help="Turn on debugging for all modules.",
action="store_true",
)
parser.add_argument("-q", "--quiet", help="Turn off warnings.", action="store_true")
parser.add_argument(
"-c",
"--config",
help="Path to the config file (default: config.py next to the script or in the current directory).",
metavar="FILE",
default=None,
)
parser.add_argument(
"--version", action="version", version="NetBox-Zabbix Sync 3.4.0"
)
# ── Boolean config overrides ───────────────────────────────────────────────
bool_group = parser.add_argument_group(
"config overrides (boolean)",
"Override boolean settings from config.py. "
"Use --flag to enable or --no-flag to disable. "
"When omitted, the value from config.py (or the built-in default) is used.",
)
for key, help_text in _BOOL_ARGS:
flag = key.replace("_", "-")
bool_group.add_argument(
f"--{flag}",
dest=key,
help=help_text,
action=argparse.BooleanOptionalAction,
default=None,
)
# ── String config overrides ────────────────────────────────────────────────
str_group = parser.add_argument_group(
"config overrides (string)",
"Override string settings from config.py. "
"When omitted, the value from config.py (or the built-in default) is used.",
)
for key, help_text, metavar in _STR_ARGS:
flag = key.replace("_", "-")
str_group.add_argument(
f"--{flag}",
dest=key,
help=help_text,
metavar=metavar,
default=None,
)
args = parser.parse_args()
main(args)
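The boolean override flags above rely on argparse.BooleanOptionalAction (Python 3.9+), which generates a --flag/--no-flag pair from a single add_argument call; a minimal sketch with a hypothetical flag:

```python
import argparse

parser = argparse.ArgumentParser()
# BooleanOptionalAction auto-generates both --sync-vms and --no-sync-vms
parser.add_argument("--sync-vms", dest="sync_vms",
                    action=argparse.BooleanOptionalAction, default=None)

enabled = parser.parse_args(["--sync-vms"]).sync_vms       # True
disabled = parser.parse_args(["--no-sync-vms"]).sync_vms   # False
unset = parser.parse_args([]).sync_vms                     # None -> keep config.py value
```

Defaulting to None (rather than False) is what lets _apply_cli_overrides distinguish "flag omitted" from "flag explicitly disabled".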
@@ -0,0 +1,405 @@
"""Core component of the sync process"""
import ssl
from os import environ
from typing import Any
from pynetbox import api as nbapi
from pynetbox.core.query import RequestError as NetBoxRequestError
from requests.exceptions import ConnectionError as RequestsConnectionError
from zabbix_utils import APIRequestError, ProcessingError, ZabbixAPI
from netbox_zabbix_sync.modules.device import PhysicalDevice
from netbox_zabbix_sync.modules.exceptions import SyncError
from netbox_zabbix_sync.modules.logging import get_logger
from netbox_zabbix_sync.modules.settings import DEFAULT_CONFIG
from netbox_zabbix_sync.modules.tools import (
convert_recordset,
proxy_prepper,
verify_hg_format,
)
from netbox_zabbix_sync.modules.virtual_machine import VirtualMachine
logger = get_logger()
class Sync:
"""
Class that hosts the main sync process.
This class is used to connect to NetBox and Zabbix and run the sync process.
"""
def __init__(self, config: dict[str, Any] | None = None):
"""
Initialize the Sync class.
:param config: Optional configuration dict, merged over DEFAULT_CONFIG.
"""
self.netbox = None
self.zabbix = None
self.nb_version = None
default_config = DEFAULT_CONFIG.copy()
combined_config = {
**default_config,
**(config if config else {}),
}
self.config: dict[str, Any] = combined_config
def connect(
self, nb_host, nb_token, zbx_host, zbx_user=None, zbx_pass=None, zbx_token=None
):
"""
Connect to the NetBox and Zabbix APIs.
:param nb_host: NetBox URL
:param nb_token: NetBox API token
:param zbx_host: Zabbix URL
:param zbx_user: Zabbix username (password-based authentication)
:param zbx_pass: Zabbix password (password-based authentication)
:param zbx_token: Zabbix API token (token-based authentication)
:return: True if both connections succeed, False otherwise.
"""
# Initialize Netbox API connection
netbox = nbapi(nb_host, token=nb_token, threading=True)
try:
# Get NetBox version
nb_version = netbox.version
# Test API access by attempting to access a basic endpoint
# This will catch authorization errors early
netbox.dcim.devices.count()
logger.debug("NetBox version is %s.", nb_version)
self.netbox = netbox
self.nb_version = nb_version
except RequestsConnectionError:
logger.error(
"Unable to connect to NetBox with URL %s. Please check the URL and status of NetBox.",
nb_host,
)
return False
except NetBoxRequestError as nb_error:
e = f"NetBox returned the following error: {nb_error}."
logger.error(e)
return False
# Check Netbox API token format based on NetBox version
if not self._validate_netbox_token(nb_token, self.nb_version):
return False
# Set Zabbix API
if (zbx_pass or zbx_user) and zbx_token:
e = (
"Both user/password (ZABBIX_USER, ZABBIX_PASS) and ZABBIX_TOKEN environment variables are set. "
"Please choose between token-based or password-based authentication."
)
logger.error(e)
return False
try:
ssl_ctx = ssl.create_default_context()
# If a custom CA bundle is set for pynetbox (requests), also use it for the Zabbix API
if environ.get("REQUESTS_CA_BUNDLE", None):
ssl_ctx.load_verify_locations(environ["REQUESTS_CA_BUNDLE"])
if not zbx_token:
logger.debug("Using user/password authentication for Zabbix API.")
self.zabbix = ZabbixAPI(
zbx_host, user=zbx_user, password=zbx_pass, ssl_context=ssl_ctx
)
else:
logger.debug("Using token authentication for Zabbix API.")
self.zabbix = ZabbixAPI(zbx_host, token=zbx_token, ssl_context=ssl_ctx)
self.zabbix.check_auth()
logger.debug("Zabbix version is %s.", self.zabbix.version)
except (APIRequestError, ProcessingError) as zbx_error:
e = f"Zabbix returned the following error: {zbx_error}."
logger.error(e)
return False
return True
def _validate_netbox_token(self, token: str, nb_version: str) -> bool:
"""Validate the format of the NetBox token based on the NetBox version.
:param token: The NetBox token to validate.
:param nb_version: The version of NetBox being used.
:return: True if the token format is valid for the given NetBox version, False otherwise.
"""
support_token_url = (
"https://netboxlabs.com/docs/netbox/integrations/rest-api/#v1-and-v2-tokens" # noqa: S105
)
token_prefix = "nbt_" # noqa: S105
nb_v2_support_version = "4.5"
v2_token = bool(token.startswith(token_prefix) and "." in token)
v2_error_token = bool(token.startswith(token_prefix) and "." not in token)
# Check if the token is passed without a proper key.token format
if v2_error_token:
logger.error(
"It looks like an invalid v2 token was passed. For more info, see %s",
support_token_url,
)
return False
# Warning message for Netbox token v1 with Netbox v4.5 and higher
if not v2_token and nb_version >= nb_v2_support_version:
logger.warning(
"Using Netbox v1 token format. "
"Consider updating to a v2 token. For more info, see %s",
support_token_url,
)
elif v2_token and nb_version < nb_v2_support_version:
logger.error(
"Using Netbox v2 token format with Netbox version lower than 4.5. "
"Revert to v1 token or upgrade Netbox to 4.5 or higher. For more info, see %s",
support_token_url,
)
return False
elif v2_token and nb_version >= nb_v2_support_version:
logger.debug("Using NetBox v2 token format.")
else:
logger.debug("Using NetBox v1 token format.")
return True
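The token-format checks above reduce to a small classification on prefix and key.token shape; a standalone sketch (the token strings below are hypothetical):

```python
TOKEN_PREFIX = "nbt_"  # NetBox v2 tokens look like nbt_<key>.<secret>

def classify_token(token):
    """Classify a NetBox token as 'v1', 'v2', or 'v2-malformed'."""
    if token.startswith(TOKEN_PREFIX):
        return "v2" if "." in token else "v2-malformed"
    return "v1"

# Hypothetical token strings
v2_kind = classify_token("nbt_abc123.def456")
bad_kind = classify_token("nbt_abc123def456")       # prefix but no key.token separator
v1_kind = classify_token("0123456789abcdef0123456789abcdef01234567")
```

Only the malformed v2 case aborts the sync; a v1 token on NetBox 4.5+ merely logs a warning.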
def start(self):
"""
Run the NetBox to Zabbix sync process.
"""
if not self.netbox or not self.zabbix:
e = "Not able to start sync: No connection to NetBox or Zabbix API."
logger.error(e)
return False
device_cfs = []
vm_cfs = []
# Create API call to get all custom fields which are on the device objects
device_cfs = list(
self.netbox.extras.custom_fields.filter(
type=["text", "object", "select"], content_types="dcim.device"
)
)
# Check if the provided Hostgroup layout is valid
verify_hg_format(
self.config["hostgroup_format"],
device_cfs=device_cfs,
hg_type="dev",
logger=logger,
)
if self.config["sync_vms"]:
vm_cfs = list(
self.netbox.extras.custom_fields.filter(
type=["text", "object", "select"],
content_types="virtualization.virtualmachine",
)
)
verify_hg_format(
self.config["vm_hostgroup_format"],
vm_cfs=vm_cfs,
hg_type="vm",
logger=logger,
)
# Set API parameter mapping based on API version
proxy_name = "host" if str(self.zabbix.version) < "7" else "name"
# Get all Zabbix and NetBox data
netbox_devices = list(
self.netbox.dcim.devices.filter(**self.config["nb_device_filter"])
)
netbox_vms = []
if self.config["sync_vms"]:
netbox_vms = list(
self.netbox.virtualization.virtual_machines.filter(
**self.config["nb_vm_filter"]
)
)
netbox_site_groups = convert_recordset(self.netbox.dcim.site_groups.all())
netbox_regions = convert_recordset(self.netbox.dcim.regions.all())
netbox_journals = self.netbox.extras.journal_entries
zabbix_groups = self.zabbix.hostgroup.get( # type: ignore
output=["groupid", "name"]
)
zabbix_templates = self.zabbix.template.get( # type: ignore
output=["templateid", "name"]
)
zabbix_proxies = self.zabbix.proxy.get( # type: ignore
output=["proxyid", proxy_name]
)
# Set empty list for proxy processing Zabbix <= 6
zabbix_proxygroups = []
if str(self.zabbix.version) >= "7":
zabbix_proxygroups = self.zabbix.proxygroup.get( # type: ignore
output=["proxy_groupid", "name"]
)
# Sanitize proxy data
if proxy_name == "host":
for proxy in zabbix_proxies:
proxy["name"] = proxy.pop("host")
# Prepare list of all proxy and proxy_groups
zabbix_proxy_list = proxy_prepper(zabbix_proxies, zabbix_proxygroups)
# Go through all NetBox VMs
for nb_vm in netbox_vms:
try:
vm = VirtualMachine(
nb_vm,
self.zabbix,
netbox_journals,
self.nb_version,
self.config["create_journal"],
logger,
config=self.config,
)
logger.debug("Host %s: Started operations on VM.", vm.name)
vm.set_vm_template()
# Check if a valid template has been found for this VM.
if not vm.zbx_template_names:
continue
vm.set_hostgroup(
self.config["vm_hostgroup_format"],
netbox_site_groups,
netbox_regions,
)
# Check if a valid hostgroup has been found for this VM.
if not vm.hostgroups:
continue
if self.config["extended_site_properties"] and nb_vm.site:
logger.debug("Host %s: extending site information.", vm.name)
vm.site = convert_recordset(
self.netbox.dcim.sites.filter(id=nb_vm.site.id)
)
vm.set_inventory(nb_vm)
vm.set_usermacros()
vm.set_tags()
# Checks if the VM is in a cleanup state
if vm.status in self.config["zabbix_device_removal"]:
if vm.zabbix_id:
# Delete device from Zabbix
# and remove hostID from self.netbox.
vm.cleanup()
logger.info("Host %s: cleanup complete", vm.name)
continue
# Device has been added to NetBox
# but is not in the Active state
logger.info(
"Host %s: Skipping since this host is not in the active state.",
vm.name,
)
continue
# Check if the VM is in the disabled state
if vm.status in self.config["zabbix_device_disable"]:
vm.zabbix_state = 1
# Add hostgroup if config is set
if self.config["create_hostgroups"]:
# Create new hostgroup. Potentially multiple groups if nested
hostgroups = vm.create_zbx_hostgroup(zabbix_groups)
# go through all newly created hostgroups
for group in hostgroups:
# Add new hostgroups to zabbix group list
zabbix_groups.append(group)
# Check if VM is already in Zabbix
if vm.zabbix_id:
vm.consistency_check(
zabbix_groups,
zabbix_templates,
zabbix_proxy_list,
self.config["full_proxy_sync"],
self.config["create_hostgroups"],
)
continue
# Add VM to Zabbix
vm.create_in_zabbix(zabbix_groups, zabbix_templates, zabbix_proxy_list)
except SyncError:
pass
for nb_device in netbox_devices:
try:
# Set device instance set data such as hostgroup and template information.
device = PhysicalDevice(
nb_device,
self.zabbix,
netbox_journals,
self.nb_version,
self.config["create_journal"],
logger,
config=self.config,
)
logger.debug("Host %s: Started operations on device.", device.name)
device.set_template(
self.config["templates_config_context"],
self.config["templates_config_context_overrule"],
)
# Check if a valid template has been found for this device.
if not device.zbx_template_names:
continue
device.set_hostgroup(
self.config["hostgroup_format"], netbox_site_groups, netbox_regions
)
# Check if a valid hostgroup has been found for this device.
if not device.hostgroups:
logger.warning(
"Host %s: has no valid hostgroups, Skipping this host...",
device.name,
)
continue
if self.config["extended_site_properties"] and nb_device.site:
logger.debug("Host %s: extending site information.", device.name)
device.site = convert_recordset(
self.netbox.dcim.sites.filter(id=nb_device.site.id)
)
device.set_inventory(nb_device)
device.set_usermacros()
device.set_tags()
# Checks if device is part of cluster.
# Requires clustering variable
if device.is_cluster() and self.config["clustering"]:
# Check if device is primary or secondary
if device.promote_primary_device():
logger.info(
"Host %s: is part of cluster and primary.", device.name
)
else:
# Device is secondary in cluster.
# Don't continue with this device.
logger.info(
"Host %s: Is part of cluster but not primary. Skipping this host...",
device.name,
)
continue
# Checks if device is in cleanup state
if device.status in self.config["zabbix_device_removal"]:
if device.zabbix_id:
# Delete device from Zabbix
# and remove hostID from NetBox.
device.cleanup()
logger.info("Host %s: cleanup complete", device.name)
continue
# Device has been added to NetBox
# but is not in the Active state
logger.info(
"Host %s: Skipping since this host is not in the active state.",
device.name,
)
continue
# Check if the device is in the disabled state
if device.status in self.config["zabbix_device_disable"]:
device.zabbix_state = 1
# Add hostgroup if config is set
if self.config["create_hostgroups"]:
# Create new hostgroup. Potentially multiple groups if nested
hostgroups = device.create_zbx_hostgroup(zabbix_groups)
# go through all newly created hostgroups
for group in hostgroups:
# Add new hostgroups to zabbix group list
zabbix_groups.append(group)
# Check if device is already in Zabbix
if device.zabbix_id:
device.consistency_check(
zabbix_groups,
zabbix_templates,
zabbix_proxy_list,
self.config["full_proxy_sync"],
self.config["create_hostgroups"],
)
continue
# Add device to Zabbix
device.create_in_zabbix(
zabbix_groups, zabbix_templates, zabbix_proxy_list
)
except SyncError:
pass
self.zabbix.logout()
return True
@@ -1,33 +1,47 @@
#!/usr/bin/env python3
"""
All custom exceptions used for Exception generation
"""
class SyncError(Exception):
""" Class SyncError """
"""Class SyncError"""
class JournalError(Exception):
""" Class SyncError """
"""Class SyncError"""
class SyncExternalError(SyncError):
""" Class SyncExternalError """
"""Class SyncExternalError"""
class SyncInventoryError(SyncError):
""" Class SyncInventoryError """
"""Class SyncInventoryError"""
class SyncDuplicateError(SyncError):
""" Class SyncDuplicateError """
"""Class SyncDuplicateError"""
class EnvironmentVarError(SyncError):
""" Class EnvironmentVarError """
"""Class EnvironmentVarError"""
class InterfaceConfigError(SyncError):
""" Class InterfaceConfigError """
"""Class InterfaceConfigError"""
class ProxyConfigError(SyncError):
""" Class ProxyConfigError """
"""Class ProxyConfigError"""
class HostgroupError(SyncError):
""" Class HostgroupError """
"""Class HostgroupError"""
class TemplateError(SyncError):
""" Class TemplateError """
"""Class TemplateError"""
class UsermacroError(SyncError):
"""Class UsermacroError"""
@@ -0,0 +1,125 @@
"""
Modules that set description of a host in Zabbix
"""
from datetime import datetime
from logging import getLogger
from re import findall as re_findall
class Description:
"""
Class that generates the description for a host in Zabbix based on the configuration provided.
INPUT:
- netbox_object: The NetBox object that is being synced.
- configuration: configuration of the syncer.
Required keys in configuration:
description: Can be "static", "dynamic" or a custom description with macros.
- nb_version: The version of NetBox that is being used.
"""
def __init__(self, netbox_object, configuration, nb_version, logger=None):
self.netbox_object = netbox_object
self.name = self.netbox_object.name
self.configuration = configuration
self.nb_version = nb_version
self.logger = logger or getLogger(__name__)
self._set_default_macro_values()
self._set_defaults()
def _set_default_macro_values(self):
"""
Sets the default macro values for the description.
"""
# Get the datetime format from the configuration,
# or use the default format if not provided
dt_format = self.configuration.get("description_dt_format", "%Y-%m-%d %H:%M:%S")
# Set the datetime macro
try:
datetime_value = datetime.now().strftime(dt_format)
except (ValueError, TypeError) as e:
self.logger.warning(
"Host %s: invalid datetime format '%s': %s. Using default format.",
self.name,
dt_format,
e,
)
datetime_value = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
# Set the owner macro
owner = self.netbox_object.owner if self.nb_version >= "4.5" else ""
# Set the macro list
self.macros = {"{datetime}": datetime_value, "{owner}": owner}
def _resolve_macros(self, description):
"""
Takes a description and resolves the macros in it.
Returns the description with the macros resolved.
"""
# Find all macros in the description
provided_macros = re_findall(r"\{\w+\}", description)
# Go through all macros provided in the NB description
for macro in provided_macros:
# If the macro is in the list of default macro values
if macro in self.macros:
# Replace the macro in the description with the value of the macro
description = description.replace(macro, str(self.macros[macro]))
else:
# One of the macros is invalid.
self.logger.warning(
"Host %s: macro %s is not valid. Falling back to default.",
self.name,
macro,
)
return False
return description
def _set_defaults(self):
"""
Sets the default descriptions for the host.
"""
self.defaults = {
"static": "Host added by NetBox sync script.",
"dynamic": (
"Host by owner {owner} added by NetBox sync script on {datetime}."
),
}
def _custom_override(self):
"""
Checks if the description is mentioned in the config context.
"""
zabbix_config = self.netbox_object.config_context.get("zabbix")
if zabbix_config and "description" in zabbix_config:
return zabbix_config["description"]
return False
def generate(self):
"""
Generates the description for the host.
"""
# First: check if an override is present.
config_context_description = self._custom_override()
if config_context_description is not False:
resolved = self._resolve_macros(config_context_description)
return resolved if resolved else self.defaults["static"]
# Override is not present: continue with config description
description = ""
if "description" not in self.configuration:
# If no description config is provided, use default static
return self.defaults["static"]
if not self.configuration["description"]:
# The configuration is set to False, meaning an empty description
return description
if self.configuration["description"] in self.defaults:
# The description is one of the default options
description = self.defaults[self.configuration["description"]]
else:
# The description is set to a custom description
description = self.configuration["description"]
# Resolve the macros in the description
final_description = self._resolve_macros(description)
if final_description:
return final_description
return self.defaults["static"]
@@ -1,23 +1,39 @@
"""Module for all hostgroup related code"""
from logging import getLogger
from modules.exceptions import HostgroupError
from modules.tools import build_path
class Hostgroup():
from logging import getLogger
from netbox_zabbix_sync.modules.exceptions import HostgroupError
from netbox_zabbix_sync.modules.tools import build_path, cf_to_string
class Hostgroup:
"""Hostgroup class for devices and VM's
Takes type (vm or dev) and NB object"""
def __init__(self, obj_type, nb_obj, version, logger=None):
def __init__(
self,
obj_type,
nb_obj,
version,
logger=None,
nested_sitegroup_flag=False,
nested_region_flag=False,
nb_regions=None,
nb_groups=None,
):
self.logger = logger if logger else getLogger(__name__)
if obj_type not in ("vm", "dev"):
msg = f"Unable to create hostgroup with type {type}"
self.logger.error()
self.logger.error(msg)
raise HostgroupError(msg)
self.type = str(obj_type)
self.nb = nb_obj
self.name = self.nb.name
self.nb_version = version
# Used for nested data objects
self.nested_objects = {}
self.set_nesting(
nested_sitegroup_flag, nested_region_flag, nb_groups, nb_regions
)
self._set_format_options()
def __str__(self):
@@ -34,7 +50,7 @@ class Hostgroup():
format_options = {}
# Set variables for both type of devices
if self.type in ("vm", "dev"):
# Role fix for Netbox <=3
# Role fix for NetBox <=3
role = None
if self.nb_version.startswith(("2", "3")) and self.type == "dev":
role = self.nb.device_role.name if self.nb.device_role else None
@@ -46,90 +62,102 @@ class Hostgroup():
format_options["site_group"] = None
if self.nb.site:
if self.nb.site.region:
format_options["region"] = self.generate_parents("region",
str(self.nb.site.region))
format_options["region"] = self.generate_parents(
"region", str(self.nb.site.region)
)
if self.nb.site.group:
format_options["site_group"] = self.generate_parents("site_group",
str(self.nb.site.group))
format_options["site_group"] = self.generate_parents(
"site_group", str(self.nb.site.group)
)
format_options["role"] = role
format_options["site"] = self.nb.site.name if self.nb.site else None
format_options["tenant"] = str(self.nb.tenant) if self.nb.tenant else None
format_options["tenant_group"] = str(self.nb.tenant.group) if self.nb.tenant else None
format_options["platform"] = self.nb.platform.name if self.nb.platform else None
format_options["tenant_group"] = (
str(self.nb.tenant.group) if self.nb.tenant else None
)
format_options["platform"] = (
self.nb.platform.name if self.nb.platform else None
)
# Variables only applicable for devices
if self.type == "dev":
format_options["manufacturer"] = self.nb.device_type.manufacturer.name
format_options["location"] = str(self.nb.location) if self.nb.location else None
# Variables only applicable for VM's
if self.type == "vm":
# Check if a cluster is configured. Could also be configured in a site.
if self.nb.cluster:
format_options["cluster"] = self.nb.cluster.name
format_options["cluster_type"] = self.nb.cluster.type.name
format_options["location"] = (
str(self.nb.location) if self.nb.location else None
)
format_options["rack"] = self.nb.rack.name if self.nb.rack else None
# Variables only applicable for VM's such as clusters
if self.type == "vm" and self.nb.cluster:
format_options["cluster"] = self.nb.cluster.name
format_options["cluster_type"] = self.nb.cluster.type.name
self.format_options = format_options
self.logger.debug(
"Host %s: Resolved properties for use in hostgroups: %s",
self.name,
self.format_options,
)
def set_nesting(self, nested_sitegroup_flag, nested_region_flag,
nb_groups, nb_regions):
def set_nesting(
self, nested_sitegroup_flag, nested_region_flag, nb_groups, nb_regions
):
"""Set nesting options for this Hostgroup"""
self.nested_objects = {"site_group": {"flag": nested_sitegroup_flag, "data": nb_groups},
"region": {"flag": nested_region_flag, "data": nb_regions}}
self.nested_objects = {
"site_group": {"flag": nested_sitegroup_flag, "data": nb_groups},
"region": {"flag": nested_region_flag, "data": nb_regions},
}
def generate(self, hg_format=None):
def generate(self, hg_format):
"""Generate hostgroup based on a provided format"""
# Set format to default in case its not specified
if not hg_format:
hg_format = "site/manufacturer/role" if self.type == "dev" else "cluster/role"
# Split all given names
hg_output = []
hg_items = hg_format.split("/")
for hg_item in hg_items:
# Check if requested data is available as option for this host
if hg_item not in self.format_options:
# Check if a custom field exists with this name
cf_data = self.custom_field_lookup(hg_item)
# CF does not exist
if not cf_data["result"]:
msg = (f"Unable to generate hostgroup for host {self.name}. "
f"Item type {hg_item} not supported.")
self.logger.error(msg)
raise HostgroupError(msg)
# CF data is populated
if cf_data["cf"]:
hg_output.append(cf_data["cf"])
# If the string is between quotes, use it as a literal in the hostgroup name
minimum_length = 2
if (
len(hg_item) > minimum_length
and hg_item[0] == hg_item[-1]
and hg_item[0] in ("'", '"')
):
hg_output.append(hg_item[1:-1])
else:
# Check if a custom field exists with this name
cf_data = self.custom_field_lookup(hg_item)
# CF does not exist
if not cf_data["result"]:
msg = (
f"Unable to generate hostgroup for host {self.name}. "
f"Item type {hg_item} not supported."
)
self.logger.error(msg)
raise HostgroupError(msg)
# CF data is populated
if cf_data["cf"]:
hg_output.append(cf_to_string(cf_data["cf"]))
continue
# Check if there is a value associated to the variable.
# For instance, if a device has no location, do not use it with hostgroup calculation
hostgroup_value = self.format_options[hg_item]
if hostgroup_value:
hg_output.append(hostgroup_value)
else:
self.logger.info(
"Host %s: Used field '%s' has no value.", self.name, hg_item
)
# Check if the hostgroup is populated with at least one item.
if bool(hg_output):
return "/".join(hg_output)
msg = (f"Unable to generate hostgroup for host {self.name}."
" Not enough valid items. This is most likely"
" due to the use of custom fields that are empty"
" or an invalid hostgroup format.")
self.logger.error(msg)
raise HostgroupError(msg)
def list_formatoptions(self):
"""
Function to easily troubleshoot which values
are generated for a specific device or VM.
"""
print(f"The following options are available for host {self.name}")
for option_type, value in self.format_options.items():
if value is not None:
print(f"{option_type} - {value}")
print("The following options are not available")
for option_type, value in self.format_options.items():
if value is None:
print(f"{option_type}")
msg = (
f"Host {self.name}: Generating hostgroup name for '{hg_format}' failed. "
f"This is most likely due to fields that have no value."
)
self.logger.warning(msg)
return None
def custom_field_lookup(self, hg_category):
"""
Checks if a valid custom field is present in Netbox.
Checks if a valid custom field is present in NetBox.
INPUT: Custom field name
OUTPUT: dictionary with 'result' and 'cf' keys.
"""
@@ -150,11 +178,13 @@ class Hostgroup():
OUTPUT: STRING - Either the single child name or child and parents.
"""
# Check if this type of nesting is supported.
if not nest_type in self.nested_objects:
if nest_type not in self.nested_objects:
return child_object
# If the nested flag is True, perform parent calculation
if self.nested_objects[nest_type]["flag"]:
final_nested_object = build_path(child_object, self.nested_objects[nest_type]["data"])
final_nested_object = build_path(
child_object, self.nested_objects[nest_type]["data"]
)
return "/".join(final_nested_object)
# Nesting is not allowed for this object. Return child_object
return child_object
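The generate() logic above — quoted items pass through as literals, other items resolve against the host's format options, and empty fields are dropped — can be sketched standalone (function name and sample data are illustrative):

```python
def generate_hostgroup(hg_format, options):
    """Build a hostgroup path from a '/'-separated format string.
    Quoted items are literals; other items are looked up in options."""
    parts = []
    for item in hg_format.split("/"):
        if len(item) > 2 and item[0] == item[-1] and item[0] in ("'", '"'):
            parts.append(item[1:-1])      # literal between quotes
        elif options.get(item):
            parts.append(options[item])   # resolved host property
    return "/".join(parts) if parts else None

opts = {"site": "AMS1", "role": "router", "location": None}
print(generate_hostgroup("'Network'/site/role", opts))  # Network/AMS1/router
print(generate_hostgroup("location", opts))             # None (empty field)
```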
@@ -1,10 +1,11 @@
#!/usr/bin/env python3
"""
All of the Zabbix interface related configuration
"""
from modules.exceptions import InterfaceConfigError
class ZabbixInterface():
from netbox_zabbix_sync.modules.exceptions import InterfaceConfigError
class ZabbixInterface:
"""Class that represents a Zabbix interface."""
def __init__(self, context, ip):
@@ -15,26 +16,21 @@ class ZabbixInterface():
def _set_default_port(self):
"""Sets default TCP / UDP port for different interface types"""
interface_mapping = {
1: 10050,
2: 161,
3: 623,
4: 12345
}
interface_mapping = {1: 10050, 2: 161, 3: 623, 4: 12345}
# Check if interface type is listed in mapper.
if self.interface['type'] not in interface_mapping:
if self.interface["type"] not in interface_mapping:
return False
# Set default port to interface
self.interface["port"] = str(interface_mapping[self.interface['type']])
self.interface["port"] = str(interface_mapping[self.interface["type"]])
return True
def get_context(self):
""" check if Netbox custom context has been defined. """
"""check if NetBox custom context has been defined."""
if "zabbix" in self.context:
zabbix = self.context["zabbix"]
if "interface_type" in zabbix:
self.interface["type"] = zabbix["interface_type"]
if not "interface_port" in zabbix:
if "interface_port" not in zabbix:
self._set_default_port()
return True
self.interface["port"] = zabbix["interface_port"]
@@ -43,43 +39,50 @@ class ZabbixInterface():
return False
def set_snmp(self):
""" Check if interface is type SNMP """
# pylint: disable=too-many-branches
if self.interface["type"] == 2:
# Checks if SNMP settings are defined in Netbox
"""Check if interface is type SNMP"""
snmp_interface_type = 2
if self.interface["type"] == snmp_interface_type:
# Checks if SNMP settings are defined in NetBox
if "snmp" in self.context["zabbix"]:
snmp = self.context["zabbix"]["snmp"]
self.interface["details"] = {}
details: dict[str, str] = {}
self.interface["details"] = details
# Checks if bulk config has been defined
if "bulk" in snmp:
self.interface["details"]["bulk"] = str(snmp.pop("bulk"))
details["bulk"] = str(snmp.pop("bulk"))
else:
# Fallback to bulk enabled if not specified
self.interface["details"]["bulk"] = "1"
# SNMP Version config is required in Netbox config context
details["bulk"] = "1"
# SNMP Version config is required in NetBox config context
if snmp.get("version"):
self.interface["details"]["version"] = str(snmp.pop("version"))
details["version"] = str(snmp.pop("version"))
else:
e = "SNMP version option is not defined."
raise InterfaceConfigError(e)
# If version 1 or 2 is used, get community string
if self.interface["details"]["version"] in ['1','2']:
if details["version"] in ["1", "2"]:
if "community" in snmp:
# Set SNMP community to config context value
community = snmp["community"]
else:
# Set SNMP community to default
community = "{$SNMP_COMMUNITY}"
self.interface["details"]["community"] = str(community)
details["community"] = str(community)
# If version 3 has been used, get all
# SNMPv3 Netbox related configs
elif self.interface["details"]["version"] == '3':
items = ["securityname", "securitylevel", "authpassphrase",
"privpassphrase", "authprotocol", "privprotocol",
"contextname"]
# SNMPv3 NetBox related configs
elif details["version"] == "3":
items = [
"securityname",
"securitylevel",
"authpassphrase",
"privpassphrase",
"authprotocol",
"privprotocol",
"contextname",
]
for key, item in snmp.items():
if key in items:
self.interface["details"][key] = str(item)
details[key] = str(item)
else:
e = "Unsupported SNMP version."
raise InterfaceConfigError(e)
@@ -91,13 +94,15 @@ class ZabbixInterface():
raise InterfaceConfigError(e)
def set_default_snmp(self):
""" Set default config to SNMPv2, port 161 and community macro. """
"""Set default config to SNMPv2, port 161 and community macro."""
self.interface = self.skelet
self.interface["type"] = "2"
self.interface["port"] = "161"
self.interface["details"] = {"version": "2",
"community": "{$SNMP_COMMUNITY}",
"bulk": "1"}
self.interface["details"] = {
"version": "2",
"community": "{$SNMP_COMMUNITY}",
"bulk": "1",
}
def set_default_agent(self):
"""Sets interface to Zabbix agent defaults"""
@@ -0,0 +1,41 @@
"""
Logging module for NetBox-Zabbix-sync
"""
import logging
from os import path
logger = logging.getLogger("NetBox-Zabbix-sync")
def get_logger():
"""
Return the logger for NetBox-Zabbix-sync
"""
return logger
def setup_logger():
"""
Prepare a logger with stream and file handlers
"""
# Set logging
lgout = logging.StreamHandler()
# Logfile in the project root
project_root = path.dirname(path.dirname(path.realpath(__file__)))
logfile_path = path.join(project_root, "sync.log")
lgfile = logging.FileHandler(logfile_path)
logging.basicConfig(
format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
level=logging.WARNING,
handlers=[lgout, lgfile],
)
def set_log_levels(root_level, own_level):
"""
Configure log levels for the root and NetBox-Zabbix-sync loggers
"""
logging.getLogger().setLevel(root_level)
logger.setLevel(own_level)
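A hypothetical usage of the helpers above: configure handlers once at startup, then raise only the project's logger to DEBUG (for example after parsing a verbose CLI flag) while the root logger stays at WARNING:

```python
import logging

logger = logging.getLogger("NetBox-Zabbix-sync")
logging.basicConfig(
    format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
    level=logging.WARNING,
)

def set_log_levels(root_level, own_level):
    """Configure log levels for the root and project loggers."""
    logging.getLogger().setLevel(root_level)
    logger.setLevel(own_level)

# e.g. after parsing a -v/--verbose CLI flag:
set_log_levels(logging.WARNING, logging.DEBUG)
print(logger.isEnabledFor(logging.DEBUG))               # True
print(logging.getLogger().isEnabledFor(logging.DEBUG))  # False
```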
@@ -0,0 +1,141 @@
"""
Module for parsing configuration from the top level config.py file
"""
from importlib import util
from logging import getLogger
from os import environ, path
from pathlib import Path
logger = getLogger(__name__)
# PLEASE NOTE: This is a sample config file. Please do NOT make any edits in this file!
# You should create your own config.py and it will overwrite the default config.
DEFAULT_CONFIG = {
"templates_config_context": False,
"templates_config_context_overrule": False,
"template_cf": "zabbix_template",
"device_cf": "zabbix_hostid",
"proxy_cf": False,
"proxy_group_cf": False,
"clustering": False,
"create_hostgroups": True,
"create_journal": False,
"sync_vms": False,
"vm_hostgroup_format": "cluster_type/cluster/role",
"full_proxy_sync": False,
"zabbix_device_removal": ["Decommissioning", "Inventory"],
"zabbix_device_disable": ["Offline", "Planned", "Staged", "Failed"],
"hostgroup_format": "site/manufacturer/role",
"traverse_regions": False,
"traverse_site_groups": False,
"nb_device_filter": {"name__n": "null"},
"nb_vm_filter": {"name__n": "null"},
"inventory_mode": "disabled",
"inventory_sync": False,
"extended_site_properties": False,
"device_inventory_map": {
"asset_tag": "asset_tag",
"virtual_chassis/name": "chassis",
"status/label": "deployment_status",
"location/name": "location",
"latitude": "location_lat",
"longitude": "location_lon",
"comments": "notes",
"name": "name",
"rack/name": "site_rack",
"serial": "serialno_a",
"device_type/model": "type",
"device_type/manufacturer/name": "vendor",
"oob_ip/address": "oob_ip",
},
"vm_inventory_map": {
"status/label": "deployment_status",
"comments": "notes",
"name": "name",
},
"usermacro_sync": False,
"device_usermacro_map": {
"serial": "{$HW_SERIAL}",
"role/name": "{$DEV_ROLE}",
"url": "{$NB_URL}",
"id": "{$NB_ID}",
},
"vm_usermacro_map": {
"memory": "{$TOTAL_MEMORY}",
"role/name": "{$DEV_ROLE}",
"url": "{$NB_URL}",
"id": "{$NB_ID}",
},
"tag_sync": False,
"tag_lower": True,
"tag_name": "NetBox",
"tag_value": "name",
"device_tag_map": {
"site/name": "site",
"rack/name": "rack",
"platform/name": "target",
},
"vm_tag_map": {
"site/name": "site",
"cluster/name": "cluster",
"platform/name": "target",
},
"description_dt_format": "%Y-%m-%d %H:%M:%S",
"description": "static",
}
def load_config(config_file=None):
"""Returns combined config from all sources"""
# Overwrite default config with config file.
# Default config file is config.py but can be overridden by providing a different file path.
conf = load_config_file(
config_default=DEFAULT_CONFIG,
config_file=config_file if config_file else "config.py",
)
# Overwrite default config and config.py with environment variables
for key in conf:
value_setting = load_env_variable(key)
if value_setting is not None:
conf[key] = value_setting
return conf
def load_env_variable(config_environvar):
"""Returns config from environment variable"""
prefix = "NBZX_"
config_environvar = prefix + config_environvar.upper()
if config_environvar in environ:
return environ[config_environvar]
return None
def load_config_file(config_default, config_file="config.py"):
"""Returns config from config.py file"""
# Find the script path and config file next to it.
script_dir = path.dirname(path.dirname(path.dirname(path.abspath(__file__))))
config_path = Path(path.join(script_dir, config_file))
# If the script directory is not found, try the current working directory
if not config_path.exists():
config_path = Path(config_file)
# If both checks fail then fallback to the default config
if not config_path.exists():
return config_default
dconf = config_default.copy()
# Dynamically import the config module
spec = util.spec_from_file_location("config", config_path)
if spec is None or spec.loader is None:
raise ImportError(f"Cannot load config from {config_path}")
config_module = util.module_from_spec(spec)
spec.loader.exec_module(config_module)
# Update DEFAULT_CONFIG with variables from the config module
for key in dconf:
if hasattr(config_module, key):
dconf[key] = getattr(config_module, key)
return dconf
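The precedence implemented above is: built-in defaults, overridden by config.py, overridden by `NBZX_`-prefixed environment variables. A condensed sketch with two illustrative keys (the file layer is simplified to a plain dict):

```python
import os

DEFAULTS = {"sync_vms": False, "create_hostgroups": True}

def load_env_variable(key):
    """Look up a setting as an NBZX_-prefixed environment variable."""
    return os.environ.get("NBZX_" + key.upper())

def load_config(file_overrides=None):
    """Merge: defaults < config file < environment variables."""
    conf = DEFAULTS.copy()
    conf.update(file_overrides or {})
    for key in conf:
        env_value = load_env_variable(key)
        if env_value is not None:
            conf[key] = env_value
    return conf

os.environ["NBZX_SYNC_VMS"] = "true"
conf = load_config({"create_hostgroups": False})
print(conf)  # {'sync_vms': 'true', 'create_hostgroups': False}
```

Note that values from the environment arrive as strings (here `'true'`, not `True`), matching the behavior of `load_env_variable` above.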
@@ -0,0 +1,143 @@
"""
All of the Zabbix tag related configuration
"""
from logging import getLogger
from netbox_zabbix_sync.modules.tools import field_mapper, remove_duplicates
class ZabbixTags:
"""Class that represents a Zabbix interface."""
def __init__(
self,
nb,
tag_map,
tag_sync=False,
tag_lower=True,
tag_name=None,
tag_value=None,
logger=None,
host=None,
):
self.nb = nb
self.name = host if host else nb.name
self.tag_map = tag_map
self.logger = logger if logger else getLogger(__name__)
self.tags = {}
self.lower = tag_lower
self.tag_name = tag_name
self.tag_value = tag_value
self.tag_sync = tag_sync
self.sync = False
self._set_config()
def __repr__(self):
return self.name
def __str__(self):
return self.__repr__()
def _set_config(self):
"""
Setup class
"""
if self.tag_sync:
self.sync = True
return True
def validate_tag(self, tag_name):
"""
Validates tag name
"""
max_tag_name_length = 256
return (
tag_name
and isinstance(tag_name, str)
and len(tag_name) <= max_tag_name_length
)
def validate_value(self, tag_value):
"""
Validates tag value
"""
max_tag_value_length = 256
return (
tag_value
and isinstance(tag_value, str)
and len(tag_value) <= max_tag_value_length
)
def render_tag(self, tag_name, tag_value):
"""
Renders a tag
"""
tag = {}
if self.validate_tag(tag_name):
if self.lower:
tag["tag"] = tag_name.lower()
else:
tag["tag"] = tag_name
else:
self.logger.warning("Tag '%s' is not a valid tag name, skipping.", tag_name)
return False
if self.validate_value(tag_value):
if self.lower:
tag["value"] = tag_value.lower()
else:
tag["value"] = tag_value
else:
self.logger.info(
"Tag '%s' has an invalid value: '%s', skipping.", tag_name, tag_value
)
return False
return tag
def generate(self):
"""
Generate the full set of tags
"""
tags = []
# Parse the field mapper for tags
if self.tag_map:
self.logger.debug("Host %s: Starting tag mapper.", self.nb.name)
field_tags = field_mapper(self.nb.name, self.tag_map, self.nb, self.logger)
for tag, value in field_tags.items():
t = self.render_tag(tag, value)
if t:
tags.append(t)
# Parse NetBox config context for tags
if (
"zabbix" in self.nb.config_context
and "tags" in self.nb.config_context["zabbix"]
and isinstance(self.nb.config_context["zabbix"]["tags"], list)
):
for tag in self.nb.config_context["zabbix"]["tags"]:
if isinstance(tag, dict):
for tagname, value in tag.items():
t = self.render_tag(tagname, value)
if t:
tags.append(t)
# Pull in NetBox device tags if tag_name is set
if self.tag_name and isinstance(self.tag_name, str):
for tag in self.nb.tags:
if (
self.tag_value
and isinstance(self.tag_value, str)
and self.tag_value.lower() in ["display", "name", "slug"]
):
value = tag[self.tag_value]
else:
value = tag["name"]
t = self.render_tag(self.tag_name, value)
if t:
tags.append(t)
tags = remove_duplicates(tags, sortkey="tag")
self.logger.debug("Host %s: Resolved tags: %s", self.name, tags)
return tags
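The validation and lowercasing in `render_tag` can be shown standalone. This sketch replaces the logging calls with a bare False return; the 256-character limit mirrors the `max_tag_name_length` / `max_tag_value_length` constants above:

```python
def render_tag(tag_name, tag_value, lower=True, max_len=256):
    """Validate and render one Zabbix tag; return False when invalid."""
    if not (tag_name and isinstance(tag_name, str) and len(tag_name) <= max_len):
        return False  # invalid name: empty, wrong type, or too long
    if not (tag_value and isinstance(tag_value, str) and len(tag_value) <= max_len):
        return False  # invalid value
    if lower:
        tag_name, tag_value = tag_name.lower(), tag_value.lower()
    return {"tag": tag_name, "value": tag_value}

print(render_tag("Site", "AMS1"))  # {'tag': 'site', 'value': 'ams1'}
print(render_tag("Site", ""))      # False (empty value)
```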
@@ -0,0 +1,257 @@
"""A collection of tools used by several classes"""
from collections.abc import Callable
from typing import Any, cast, overload
from netbox_zabbix_sync.modules.exceptions import HostgroupError
def convert_recordset(recordset):
"""Converts netbox RedcordSet to list of dicts."""
recordlist = []
for record in recordset:
recordlist.append(record.__dict__)
return recordlist
def build_path(endpoint, list_of_dicts):
"""
Builds a path list of related parent/child items.
This can be used to generate a joinable list to
be used in hostgroups.
"""
item_path = []
itemlist = [i for i in list_of_dicts if i["name"] == endpoint]
item = itemlist[0] if len(itemlist) == 1 else None
if item is None:
return []
item_path.append(item["name"])
while item["_depth"] > 0:
itemlist = [i for i in list_of_dicts if i["name"] == str(item["parent"])]
item = itemlist[0] if len(itemlist) == 1 else None
if item is None:
break
item_path.append(item["name"])
item_path.reverse()
return item_path
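With some illustrative region data, build_path walks the `_depth`/`parent` links upward and returns the root-to-leaf name chain. The function is restated here so the example is self-contained; the region names are made up:

```python
def build_path(endpoint, list_of_dicts):
    """Walk _depth/parent links upward; return root-to-leaf names."""
    item_path = []
    matches = [i for i in list_of_dicts if i["name"] == endpoint]
    item = matches[0] if len(matches) == 1 else None
    if item is None:
        return []
    item_path.append(item["name"])
    while item["_depth"] > 0:
        matches = [i for i in list_of_dicts if i["name"] == str(item["parent"])]
        item = matches[0] if len(matches) == 1 else None
        if item is None:
            break
        item_path.append(item["name"])
    item_path.reverse()
    return item_path

regions = [
    {"name": "EMEA", "parent": None, "_depth": 0},
    {"name": "NL", "parent": "EMEA", "_depth": 1},
    {"name": "Amsterdam", "parent": "NL", "_depth": 2},
]
print(build_path("Amsterdam", regions))  # ['EMEA', 'NL', 'Amsterdam']
```

The joined result (`EMEA/NL/Amsterdam`) is what the Hostgroup class uses when `traverse_regions` or `traverse_site_groups` is enabled.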
def proxy_prepper(proxy_list, proxy_group_list):
"""
Function that takes 2 lists and converts them using a
standardized format for further processing.
"""
output = []
for proxy in proxy_list:
proxy["type"] = "proxy"
proxy["id"] = proxy["proxyid"]
proxy["idtype"] = "proxyid"
proxy["monitored_by"] = 1
output.append(proxy)
for group in proxy_group_list:
group["type"] = "proxy_group"
group["id"] = group["proxy_groupid"]
group["idtype"] = "proxy_groupid"
group["monitored_by"] = 2
output.append(group)
return output
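A usage sketch of proxy_prepper with made-up Zabbix API output. The function is restated in compact form; the annotations match the code above (`monitored_by=1` for plain proxies, `2` for proxy groups, as the Zabbix 7 host API expects):

```python
def proxy_prepper(proxy_list, proxy_group_list):
    """Merge proxies and proxy groups into one annotated list."""
    output = []
    for proxy in proxy_list:
        proxy.update(type="proxy", id=proxy["proxyid"],
                     idtype="proxyid", monitored_by=1)
        output.append(proxy)
    for group in proxy_group_list:
        group.update(type="proxy_group", id=group["proxy_groupid"],
                     idtype="proxy_groupid", monitored_by=2)
        output.append(group)
    return output

combined = proxy_prepper(
    [{"proxyid": "10", "name": "proxy-ams"}],
    [{"proxy_groupid": "3", "name": "pg-eu"}],
)
print([(p["name"], p["monitored_by"]) for p in combined])
# [('proxy-ams', 1), ('pg-eu', 2)]
```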
def cf_to_string(cf, key="name", logger=None):
"""
Converts a dict-type custom field to a string
"""
if isinstance(cf, dict):
if key in cf:
return cf[key]
if logger:
logger.error(
"Conversion of custom field failed, '%s' not found in cf dict.", key
)
return None
return cf
def field_mapper(host, mapper, nbdevice, logger):
"""
Maps NetBox field data to Zabbix properties.
Used for Inventory, Usermacros and Tag mappings.
"""
data = {}
# Build a dict entry for each property in the map
for nb_field, zbx_field in mapper.items():
field_list = nb_field.split("/") # convert str to list based on delimiter
# start at the base of the dict...
value = nbdevice
# ... and step through the dict till we find the needed value
for item in field_list:
value = value[item] if value else None
# Check if the result is usable and expected
# We want to apply any int or float 0 values,
# even if python thinks those are empty.
if (value and isinstance(value, int | float | str)) or (
isinstance(value, int | float) and int(value) == 0
):
data[zbx_field] = str(value)
elif not value:
# empty value should just be an empty string for API compatibility
logger.info(
"Host %s: NetBox lookup for '%s' returned an empty value.",
host,
nb_field,
)
data[zbx_field] = ""
else:
# Value is not a string or numeral, probably not what the user expected.
logger.info(
"Host %s: Lookup for '%s' returned an unexpected type: it will be skipped.",
host,
nb_field,
)
logger.debug(
"Host %s: Field mapping complete. Mapped %s field(s).",
host,
len(list(filter(None, data.values()))),
)
return data
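The core of `field_mapper` is the slash-delimited lookup; a condensed, standalone sketch of just that traversal (the device dict below is hypothetical sample data, not a real pynetbox object):

```python
def resolve_path(obj, nb_field):
    # Condensed sketch of field_mapper()'s "a/b/c" traversal:
    # split on "/" and step into the nested structure one key at a time.
    value = obj
    for item in nb_field.split("/"):
        value = value[item] if value else None
    return value


device = {"device_type": {"model": "C9300-48P", "manufacturer": {"name": "Cisco"}}}
print(resolve_path(device, "device_type/manufacturer/name"))  # Cisco
```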
@overload
def remove_duplicates(
input_list: list[dict[Any, Any]],
sortkey: str | Callable[[dict[str, Any]], str] | None = None,
): ...
@overload
def remove_duplicates(
input_list: dict[Any, Any],
sortkey: str | Callable[[dict[str, Any]], str] | None = None,
):
"""
deprecated: input_list as dict is deprecated, use list of dicts instead
"""
def remove_duplicates(
input_list: list[dict[Any, Any]] | dict[Any, Any],
sortkey: str | Callable[[dict[str, Any]], str] | None = None,
):
"""
Removes duplicate entries from a list and sorts the list
sortkey: Optional; key to sort the list on. Can be a string or a callable function.
"""
output_list = []
if isinstance(input_list, list):
output_list = [dict(t) for t in {tuple(d.items()) for d in input_list}]
if sortkey and isinstance(sortkey, str):
output_list.sort(key=lambda x: x[sortkey])
elif sortkey and callable(sortkey):
output_list.sort(key=cast(Any, sortkey))
return output_list
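A standalone copy of the non-deprecated list path of `remove_duplicates`, fed a hypothetical tag list, shows the dedup-then-sort behavior:

```python
def remove_duplicates(input_list, sortkey=None):
    # Deduplicate by converting each dict to a hashable tuple of items,
    # then optionally sort on a key name or a callable.
    output_list = [dict(t) for t in {tuple(d.items()) for d in input_list}]
    if sortkey and isinstance(sortkey, str):
        output_list.sort(key=lambda x: x[sortkey])
    elif sortkey and callable(sortkey):
        output_list.sort(key=sortkey)
    return output_list


tags = [
    {"tag": "env", "value": "prod"},
    {"tag": "site", "value": "ams"},
    {"tag": "env", "value": "prod"},  # exact duplicate, will be dropped
]
print(remove_duplicates(tags, sortkey="tag"))
```

Note that deduplication requires every dict value to be hashable, which holds for the flat string tags and macros this helper is used on.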
def verify_hg_format(
hg_format, device_cfs=None, vm_cfs=None, hg_type="dev", logger=None
):
"""
Verifies hostgroup field format
"""
if not device_cfs:
device_cfs = []
if not vm_cfs:
vm_cfs = []
allowed_objects = {
"dev": [
"location",
"rack",
"role",
"manufacturer",
"region",
"site",
"site_group",
"tenant",
"tenant_group",
"platform",
"cluster",
],
"vm": [
"cluster_type",
"role",
"manufacturer",
"region",
"site",
"site_group",
"tenant",
"tenant_group",
"cluster",
"device",
"platform",
],
"cfs": {"dev": [], "vm": []},
}
for cf in device_cfs:
allowed_objects["cfs"]["dev"].append(cf.name) # type: ignore[index]
for cf in vm_cfs:
allowed_objects["cfs"]["vm"].append(cf.name) # type: ignore[index]
hg_objects = []
if isinstance(hg_format, list):
for f in hg_format:
hg_objects = hg_objects + f.split("/")
else:
hg_objects = hg_format.split("/")
hg_objects = sorted(set(hg_objects))
for hg_object in hg_objects:
if (
hg_object not in allowed_objects[hg_type]
and hg_object not in allowed_objects["cfs"][hg_type] # type: ignore[index]
and not hg_object.startswith(('"', "'"))
):
e = (
f"Hostgroup item {hg_object} is not valid. Make sure you"
" use valid items and separate them with '/'."
)
if logger:
logger.warning(e)
raise HostgroupError(e)
def sanatize_log_output(data):
"""
Used by the host update function to log the data
that is being sent to Zabbix.
Removes any sensitive values from the input.
"""
if not isinstance(data, dict):
return data
sanitized_data = data.copy()
# Check if there are any sensitive macros defined in the data
if "macros" in data:
for macro in sanitized_data["macros"]:
# Check if macro is secret type
if not (macro["type"] == str(1) or macro["type"] == 1):
continue
macro["value"] = "********"
# Check for interface data
if "interfaceid" in data:
# Interface ID is a value which is most likely not helpful
# in logging output or for troubleshooting.
del sanitized_data["interfaceid"]
# InterfaceID also hints that this is an interface update.
# A check is required if no macros are used for SNMP security parameters.
if "details" not in data:
return sanitized_data
for key, detail in sanitized_data["details"].items():
# If the detail is a secret, we don't want to log it.
if key in ("authpassphrase", "privpassphrase", "securityname", "community"):
# Check if a macro is used.
# If so then logging the output is not a security issue.
if detail.startswith("{$") and detail.endswith("}"):
continue
# A macro is not used, so we sanitize the value.
sanitized_data["details"][key] = "********"
return sanitized_data
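A condensed sketch of the macro-masking branch (the macro names and values below are made up); note that `dict.copy()` is shallow, so the nested macro dicts are shared with the input and masked in place:

```python
def sanitize_macros(data):
    # Condensed sketch of the macro branch of sanatize_log_output():
    # secret macros (type 1) get their value replaced before logging.
    sanitized = data.copy()
    for macro in sanitized.get("macros", []):
        if macro["type"] in (str(1), 1):  # type 1 = secret macro
            macro["value"] = "********"
    return sanitized


payload = {"macros": [
    {"macro": "{$SNMP_COMMUNITY}", "value": "s3cret", "type": "1"},
    {"macro": "{$OWNER}", "value": "netops", "type": "0"},
]}
masked = sanitize_macros(payload)
print(masked["macros"][0]["value"])  # ********
```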
@@ -0,0 +1,134 @@
"""
All of the Zabbix Usermacro related configuration
"""
from logging import getLogger
from re import match
from netbox_zabbix_sync.modules.tools import field_mapper, sanatize_log_output
class ZabbixUsermacros:
"""Class that represents Zabbix usermacros."""
def __init__(self, nb, usermacro_map, usermacro_sync, logger=None, host=None):
self.nb = nb
self.name = host if host else nb.name
self.usermacro_map = usermacro_map
self.logger = logger if logger else getLogger(__name__)
self.usermacros = {}
self.usermacro_sync = usermacro_sync
self.sync = False
self.force_sync = False
self._set_config()
def __repr__(self):
return self.name
def __str__(self):
return self.__repr__()
def _set_config(self):
"""
Set up the sync flags based on the usermacro_sync setting
"""
if str(self.usermacro_sync).lower() == "full":
self.sync = True
self.force_sync = True
elif self.usermacro_sync:
self.sync = True
return True
def validate_macro(self, macro_name):
"""
Validates usermacro name
"""
pattern = r"\{\$[A-Z0-9\._]*(\:.*)?\}"
return match(pattern, macro_name)
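The pattern accepts `{$NAME}` with an optional `:context` part, restricted to upper-case letters, digits, dot and underscore. Since `re.match` only anchors at the start of the string, trailing characters after the closing brace would still pass. For example:

```python
from re import match

# Same pattern as validate_macro()
pattern = r"\{\$[A-Z0-9\._]*(\:.*)?\}"

print(bool(match(pattern, "{$SNMP_COMMUNITY}")))  # True
print(bool(match(pattern, "{$DISK.FREE:/var}")))  # True (context form)
print(bool(match(pattern, "$lowercase")))         # False
```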
def render_macro(self, macro_name, macro_properties):
"""
Renders a full usermacro from partial input
"""
macro = {}
macrotypes = {"text": 0, "secret": 1, "vault": 2}
if self.validate_macro(macro_name):
macro["macro"] = str(macro_name)
if isinstance(macro_properties, dict):
if "value" not in macro_properties:
self.logger.info(
"Host %s: Usermacro %s has no value in Netbox, skipping.",
self.name,
macro_name,
)
return False
macro["value"] = macro_properties["value"]
if (
"type" in macro_properties
and macro_properties["type"].lower() in macrotypes
):
macro["type"] = str(macrotypes[macro_properties["type"]])
else:
macro["type"] = str(0)
if "description" in macro_properties and isinstance(
macro_properties["description"], str
):
macro["description"] = macro_properties["description"]
else:
macro["description"] = ""
elif isinstance(macro_properties, str) and macro_properties:
macro["value"] = macro_properties
macro["type"] = str(0)
macro["description"] = ""
else:
self.logger.info(
"Host %s: Usermacro %s has no value, skipping.",
self.name,
macro_name,
)
return False
else:
self.logger.warning(
"Host %s: Usermacro %s is not a valid usermacro name, skipping.",
self.name,
macro_name,
)
return False
return macro
def generate(self):
"""
Generate full set of Usermacros
"""
macros = []
data = {}
# Parse the field mapper for usermacros
if self.usermacro_map:
self.logger.debug("Host %s: Starting usermacro mapper.", self.nb.name)
field_macros = field_mapper(
self.nb.name, self.usermacro_map, self.nb, self.logger
)
for macro, value in field_macros.items():
m = self.render_macro(macro, value)
if m:
macros.append(m)
# Parse NetBox config context for usermacros
if (
"zabbix" in self.nb.config_context
and "usermacros" in self.nb.config_context["zabbix"]
):
for macro, properties in self.nb.config_context["zabbix"][
"usermacros"
].items():
m = self.render_macro(macro, properties)
if m:
macros.append(m)
data = {"macros": macros}
self.logger.debug(
"Host %s: Resolved macros: %s", self.name, sanatize_log_output(data)
)
return macros
@@ -1,39 +1,37 @@
#!/usr/bin/env python3
# pylint: disable=duplicate-code
"""Module that hosts all functions for virtual machine processing"""
from os import sys
from modules.device import PhysicalDevice
from modules.hostgroups import Hostgroup
from modules.interface import ZabbixInterface
from modules.exceptions import TemplateError, InterfaceConfigError, SyncInventoryError
try:
from config import (
traverse_site_groups,
traverse_regions
)
except ModuleNotFoundError:
print("Configuration file config.py not found in main directory."
"Please create the file or rename the config.py.example file to config.py.")
sys.exit(0)
from netbox_zabbix_sync.modules.device import PhysicalDevice
from netbox_zabbix_sync.modules.exceptions import (
InterfaceConfigError,
SyncInventoryError,
TemplateError,
)
from netbox_zabbix_sync.modules.interface import ZabbixInterface
class VirtualMachine(PhysicalDevice):
"""Model for virtual machines"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.hostgroup = None
self.zbx_template_names = None
self.hostgroup_type = "vm"
def set_hostgroup(self, hg_format, nb_site_groups, nb_regions):
"""Set the hostgroup for this device"""
# Create new Hostgroup instance
hg = Hostgroup("vm", self.nb, self.nb_api_version, logger=self.logger)
hg.set_nesting(traverse_site_groups, traverse_regions, nb_site_groups, nb_regions)
# Generate hostgroup based on hostgroup format
self.hostgroup = hg.generate(hg_format)
def _inventory_map(self):
"""use VM inventory maps"""
return self.config["vm_inventory_map"]
def _usermacro_map(self):
"""use VM usermacro maps"""
return self.config["vm_usermacro_map"]
def _tag_map(self):
"""use VM tag maps"""
return self.config["vm_tag_map"]
def set_vm_template(self):
""" Set Template for VMs. Overwrites default class
"""Set Template for VMs. Overwrites default class
to skip a lookup of custom fields."""
# Gather templates ONLY from the device specific context
try:
@@ -42,19 +40,20 @@ class VirtualMachine(PhysicalDevice):
self.logger.warning(e)
return True
def setInterfaceDetails(self): # pylint: disable=invalid-name
def set_interface_details(self):
"""
Overwrites device function to select an agent interface type by default
Agent type interfaces are more likely to be used with VMs than SNMP
"""
zabbix_snmp_interface_type = 2
try:
# Initiate interface class
interface = ZabbixInterface(self.nb.config_context, self.ip)
# Check if Netbox has device context.
# Check if NetBox has device context.
# If not fall back to old config.
if interface.get_context():
# If device is SNMP type, add additional information.
if interface.interface["type"] == 2:
if interface.interface["type"] == zabbix_snmp_interface_type:
interface.set_snmp()
else:
interface.set_default_agent()
@@ -0,0 +1,89 @@
[project]
name = "netbox-zabbix-sync"
description = "Python script to synchronize Netbox devices to Zabbix."
readme = "README.md"
requires-python = ">=3.12"
dependencies = ["igraph>=1.0.0", "pynetbox>=7.6.1", "zabbix-utils>=2.0.4"]
dynamic = ["version"]
[project.urls]
"Homepage" = "https://github.com/TheNetworkGuy/netbox-zabbix-sync"
"Issues" = "https://github.com/TheNetworkGuy/netbox-zabbix-sync/issues"
[project.scripts]
netbox-zabbix-sync = "netbox_zabbix_sync.modules.cli:parse_cli"
[build-system]
requires = ["setuptools>=64", "setuptools_scm>=8"]
build-backend = "setuptools.build_meta"
[tool.setuptools.packages.find]
include = ["netbox_zabbix_sync*"]
[tool.setuptools_scm]
version_file = "netbox_zabbix_sync/_version.py"
[tool.ruff.lint]
ignore = [
# Ignore line-length
"E501",
# Ignore too many arguments
"PLR0913",
# Ignore too many statements
"PLR0915",
# Ignore too many branches
"PLR0912",
]
select = [
# commented-out-code
"ERA001",
# flake8-bandit
"S",
# flake8-logging-format
"G",
# flake8-print
"T20",
# pep8-naming
"N",
# Pyflakes
"F",
# pycodestyle
"E",
# isort
"I",
# pyupgrade
"UP",
# flake8-2020
"YTT",
# flake8-async
"ASYNC",
# flake8-bugbear
"B",
# flake8-executable
"EXE",
# flake8-pie
"PIE",
# flake8-pyi
"PYI",
# flake8-simplify
"SIM",
# pylint
"PL",
# Ruff-specific rules
"RUF",
]
[tool.ruff.lint.per-file-ignores]
"tests/*" = [
# Ignore use of assert
"S101",
# Ignore hardcoded passwords / tokens
"S106",
]
[dependency-groups]
dev = ["pytest>=9.0.2", "pytest-cov>=7.0.0", "ruff>=0.14.14", "ty>=0.0.14"]
@@ -1,2 +1,22 @@
pynetbox
zabbix_utils
# This file was autogenerated by uv via the following command:
# uv export --format requirements-txt --no-hashes --no-dev
certifi==2026.1.4
# via requests
charset-normalizer==3.4.4
# via requests
idna==3.11
# via requests
igraph==1.0.0
# via netbox-zabbix-sync
packaging==26.0
# via pynetbox
pynetbox==7.6.1
# via netbox-zabbix-sync
requests==2.32.5
# via pynetbox
texttable==1.7.0
# via igraph
urllib3==2.6.3
# via requests
zabbix-utils==2.0.4
# via netbox-zabbix-sync
@@ -0,0 +1,163 @@
"""Tests for configuration parsing in the modules.config module."""
import os
from unittest.mock import MagicMock, patch
from netbox_zabbix_sync.modules.settings import (
DEFAULT_CONFIG,
load_config,
load_config_file,
load_env_variable,
)
def test_load_config_defaults():
"""Test that load_config returns default values when no config file or env vars are present"""
with (
patch(
"netbox_zabbix_sync.modules.settings.load_config_file",
return_value=DEFAULT_CONFIG.copy(),
),
patch(
"netbox_zabbix_sync.modules.settings.load_env_variable", return_value=None
),
):
config = load_config()
assert config == DEFAULT_CONFIG
assert config["templates_config_context"] is False
assert config["create_hostgroups"] is True
def test_load_config_file():
"""Test that load_config properly loads values from config file"""
mock_config = DEFAULT_CONFIG.copy()
mock_config["templates_config_context"] = True
mock_config["sync_vms"] = True
with (
patch(
"netbox_zabbix_sync.modules.settings.load_config_file",
return_value=mock_config,
),
patch(
"netbox_zabbix_sync.modules.settings.load_env_variable", return_value=None
),
):
config = load_config()
assert config["templates_config_context"] is True
assert config["sync_vms"] is True
# Unchanged values should remain as defaults
assert config["create_journal"] is False
def test_load_env_variables():
"""Test that load_config properly loads values from environment variables"""
# Mock env variable loading to return values for specific keys
def mock_load_env(key):
if key == "sync_vms":
return True
if key == "create_journal":
return True
return None
with (
patch(
"netbox_zabbix_sync.modules.settings.load_config_file",
return_value=DEFAULT_CONFIG.copy(),
),
patch(
"netbox_zabbix_sync.modules.settings.load_env_variable",
side_effect=mock_load_env,
),
):
config = load_config()
assert config["sync_vms"] is True
assert config["create_journal"] is True
# Unchanged values should remain as defaults
assert config["templates_config_context"] is False
def test_env_vars_override_config_file():
"""Test that environment variables override values from config file"""
mock_config = DEFAULT_CONFIG.copy()
mock_config["templates_config_context"] = True
mock_config["sync_vms"] = False
# Mock env variable that will override the config file value
def mock_load_env(key):
if key == "sync_vms":
return True
return None
with (
patch(
"netbox_zabbix_sync.modules.settings.load_config_file",
return_value=mock_config,
),
patch(
"netbox_zabbix_sync.modules.settings.load_env_variable",
side_effect=mock_load_env,
),
):
config = load_config()
# This should be overridden by the env var
assert config["sync_vms"] is True
# This should remain from the config file
assert config["templates_config_context"] is True
def test_load_config_file_function():
"""Test the load_config_file function directly"""
# Test when the file exists
with (
patch("pathlib.Path.exists", return_value=True),
patch("importlib.util.spec_from_file_location") as mock_spec,
):
# Setup the mock module with attributes
mock_module = MagicMock()
mock_module.templates_config_context = True
mock_module.sync_vms = True
# Setup the mock spec
mock_spec_instance = MagicMock()
mock_spec.return_value = mock_spec_instance
mock_spec_instance.loader.exec_module = lambda x: None
# Patch module_from_spec to return our mock module
with patch("importlib.util.module_from_spec", return_value=mock_module):
config = load_config_file(DEFAULT_CONFIG.copy())
assert config["templates_config_context"] is True
assert config["sync_vms"] is True
def test_load_config_file_not_found():
"""Test load_config_file when the config file doesn't exist"""
with patch("pathlib.Path.exists", return_value=False):
result = load_config_file(DEFAULT_CONFIG.copy())
# Should fall back to a dict equal to the defaults
assert result == DEFAULT_CONFIG
def test_load_env_variable_function():
"""Test the load_env_variable function directly"""
# Create a real environment variable for testing with correct prefix and uppercase
test_var = "NBZX_TEMPLATES_CONFIG_CONTEXT"
original_env = os.environ.get(test_var, None)
try:
# Set the environment variable with the proper prefix and case
os.environ[test_var] = "True"
# Test that it's properly read (using lowercase in the function call)
value = load_env_variable("templates_config_context")
assert value == "True"
# Test when the environment variable doesn't exist
value = load_env_variable("nonexistent_variable")
assert value is None
finally:
# Clean up - restore original environment
if original_env is not None:
os.environ[test_var] = original_env
else:
os.environ.pop(test_var, None)
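Based on the prefix and casing this test exercises, a minimal sketch of what `load_env_variable` presumably does (the real implementation lives in `netbox_zabbix_sync/modules/settings.py`):

```python
import os


def load_env_variable(key):
    # Sketch inferred from the test above: config keys map to
    # NBZX_-prefixed, upper-cased environment variable names.
    return os.environ.get(f"NBZX_{key.upper()}")


os.environ["NBZX_SYNC_VMS"] = "True"
print(load_env_variable("sync_vms"))        # True
print(load_env_variable("does_not_exist"))  # None
```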
File diff suppressed because it is too large
@@ -0,0 +1,179 @@
"""Tests for device deletion functionality in the PhysicalDevice class."""
import unittest
from unittest.mock import MagicMock, patch
from zabbix_utils import APIRequestError
from netbox_zabbix_sync.modules.device import PhysicalDevice
from netbox_zabbix_sync.modules.exceptions import SyncExternalError
class TestDeviceDeletion(unittest.TestCase):
"""Test class for device deletion functionality."""
def setUp(self):
"""Set up test fixtures."""
# Create mock NetBox device
self.mock_nb_device = MagicMock()
self.mock_nb_device.id = 123
self.mock_nb_device.name = "test-device"
self.mock_nb_device.status.label = "Decommissioning"
self.mock_nb_device.custom_fields = {"zabbix_hostid": "456"}
self.mock_nb_device.config_context = {}
# Set up a primary IP
primary_ip = MagicMock()
primary_ip.address = "192.168.1.1/24"
self.mock_nb_device.primary_ip = primary_ip
# Create mock Zabbix API
self.mock_zabbix = MagicMock()
self.mock_zabbix.version = "6.0"
# Set up mock host.get response
self.mock_zabbix.host.get.return_value = [{"hostid": "456"}]
# Mock NetBox journal class
self.mock_nb_journal = MagicMock()
# Create logger mock
self.mock_logger = MagicMock()
# Create PhysicalDevice instance with mocks
self.device = PhysicalDevice(
self.mock_nb_device,
self.mock_zabbix,
self.mock_nb_journal,
"3.0",
journal=True,
logger=self.mock_logger,
config={"device_cf": "zabbix_hostid"},
)
def test_cleanup_successful_deletion(self):
"""Test successful device deletion from Zabbix."""
# Setup
self.mock_zabbix.host.get.return_value = [{"hostid": "456"}]
self.mock_zabbix.host.delete.return_value = {"hostids": ["456"]}
# Execute
self.device.cleanup()
# Verify
self.mock_zabbix.host.get.assert_called_once_with(
filter={"hostid": "456"}, output=[]
)
self.mock_zabbix.host.delete.assert_called_once_with("456")
self.mock_nb_device.save.assert_called_once()
self.assertIsNone(self.mock_nb_device.custom_fields["zabbix_hostid"])
self.mock_logger.info.assert_called_with(
f"Host {self.device.name}: Deleted host from Zabbix."
)
def test_cleanup_device_already_deleted(self):
"""Test cleanup when device is already deleted from Zabbix."""
# Setup
self.mock_zabbix.host.get.return_value = [] # Empty list means host not found
# Execute
self.device.cleanup()
# Verify
self.mock_zabbix.host.get.assert_called_once_with(
filter={"hostid": "456"}, output=[]
)
self.mock_zabbix.host.delete.assert_not_called()
self.mock_nb_device.save.assert_called_once()
self.assertIsNone(self.mock_nb_device.custom_fields["zabbix_hostid"])
self.mock_logger.info.assert_called_with(
f"Host {self.device.name}: was already deleted from Zabbix. Removed link in NetBox."
)
def test_cleanup_api_error(self):
"""Test cleanup when Zabbix API returns an error."""
# Setup
self.mock_zabbix.host.get.return_value = [{"hostid": "456"}]
self.mock_zabbix.host.delete.side_effect = APIRequestError("API Error")
# Execute and verify
with self.assertRaises(SyncExternalError):
self.device.cleanup()
# Verify correct calls were made
self.mock_zabbix.host.get.assert_called_once_with(
filter={"hostid": "456"}, output=[]
)
self.mock_zabbix.host.delete.assert_called_once_with("456")
self.mock_nb_device.save.assert_not_called()
self.mock_logger.error.assert_called()
def test_zeroize_cf(self):
"""Test _zeroize_cf method that clears the custom field."""
# Execute
self.device._zeroize_cf()
# Verify
self.assertIsNone(self.mock_nb_device.custom_fields["zabbix_hostid"])
self.mock_nb_device.save.assert_called_once()
def test_create_journal_entry(self):
"""Test create_journal_entry method."""
# Setup
test_message = "Test journal entry"
# Execute
result = self.device.create_journal_entry("info", test_message)
# Verify
self.assertTrue(result)
self.mock_nb_journal.create.assert_called_once()
journal_entry = self.mock_nb_journal.create.call_args[0][0]
self.assertEqual(journal_entry["assigned_object_type"], "dcim.device")
self.assertEqual(journal_entry["assigned_object_id"], 123)
self.assertEqual(journal_entry["kind"], "info")
self.assertEqual(journal_entry["comments"], test_message)
def test_create_journal_entry_invalid_severity(self):
"""Test create_journal_entry with invalid severity."""
# Execute
result = self.device.create_journal_entry("invalid", "Test message")
# Verify
self.assertFalse(result)
self.mock_nb_journal.create.assert_not_called()
self.mock_logger.warning.assert_called()
def test_create_journal_entry_when_disabled(self):
"""Test create_journal_entry when journaling is disabled."""
# Setup - create device with journal=False
device = PhysicalDevice(
self.mock_nb_device,
self.mock_zabbix,
self.mock_nb_journal,
"3.0",
journal=False, # Disable journaling
logger=self.mock_logger,
config={"device_cf": "zabbix_hostid"},
)
# Execute
result = device.create_journal_entry("info", "Test message")
# Verify
self.assertFalse(result)
self.mock_nb_journal.create.assert_not_called()
def test_cleanup_updates_journal(self):
"""Test that cleanup method creates a journal entry."""
# Setup
self.mock_zabbix.host.get.return_value = [{"hostid": "456"}]
# Execute
with patch.object(self.device, "create_journal_entry") as mock_journal_entry:
self.device.cleanup()
# Verify
mock_journal_entry.assert_called_once_with(
"warning", "Deleted host from Zabbix"
)
@@ -0,0 +1,157 @@
"""Tests for the Description class in the host_description module."""
import unittest
from unittest.mock import MagicMock, patch
from netbox_zabbix_sync.modules.host_description import Description
class TestDescription(unittest.TestCase):
"""Test class for Description functionality."""
def setUp(self):
"""Set up test fixtures."""
# Create mock NetBox object
self.mock_nb_object = MagicMock()
self.mock_nb_object.name = "test-host"
self.mock_nb_object.owner = "admin"
self.mock_nb_object.config_context = {}
# Create logger mock
self.mock_logger = MagicMock()
# Base configuration
self.base_config = {}
# Test 1: Config context description override
@patch("netbox_zabbix_sync.modules.host_description.datetime")
def test_1_config_context_override_value(self, mock_datetime):
"""Test 1: User that provides a config context description value should get this override value back."""
mock_now = MagicMock()
mock_now.strftime.return_value = "2026-02-25 10:30:00"
mock_datetime.now.return_value = mock_now
# Set config context with description
self.mock_nb_object.config_context = {
"zabbix": {"description": "Custom override for {owner}"}
}
config = {"description": "static"}
desc = Description(self.mock_nb_object, config, "4.5", logger=self.mock_logger)
result = desc.generate()
# Should use config context, not config
self.assertEqual(result, "Custom override for admin")
# Test 2: Static description
def test_2_static_description(
self,
):
"""Test 2: User that provides static as description should get the default static value."""
config = {"description": "static"}
desc = Description(self.mock_nb_object, config, "4.5", logger=self.mock_logger)
result = desc.generate()
self.assertEqual(result, "Host added by NetBox sync script.")
# Test 3: Dynamic description
@patch("netbox_zabbix_sync.modules.host_description.datetime")
def test_3_dynamic_description(self, mock_datetime):
"""Test 3: User that provides 'dynamic' should get the resolved description string back."""
mock_now = MagicMock()
mock_now.strftime.return_value = "2026-02-25 10:30:00"
mock_datetime.now.return_value = mock_now
config = {"description": "dynamic"}
desc = Description(self.mock_nb_object, config, "4.5", logger=self.mock_logger)
result = desc.generate()
expected = (
"Host by owner admin added by NetBox sync script on 2026-02-25 10:30:00."
)
self.assertEqual(result, expected)
# Test 4: Invalid macro fallback
def test_4_invalid_macro_fallback_to_static(self):
"""Test 4: Users who provide invalid macros should fallback to the static variant."""
config = {"description": "Host {owner} with {invalid_macro}"}
desc = Description(self.mock_nb_object, config, "4.5", logger=self.mock_logger)
result = desc.generate()
# Should fall back to static default
self.assertEqual(result, "Host added by NetBox sync script.")
# Verify warning was logged
self.mock_logger.warning.assert_called_once()
# Test 5: Custom time format
@patch("netbox_zabbix_sync.modules.host_description.datetime")
def test_5_custom_datetime_format(self, mock_datetime):
"""Test 5: Users who change the time format."""
mock_now = MagicMock()
# Will be called twice: once with custom format, once for string
mock_now.strftime.side_effect = ["25/02/2026", "25/02/2026"]
mock_datetime.now.return_value = mock_now
config = {
"description": "Updated on {datetime}",
"description_dt_format": "%d/%m/%Y",
}
desc = Description(self.mock_nb_object, config, "4.5", logger=self.mock_logger)
result = desc.generate()
self.assertEqual(result, "Updated on 25/02/2026")
# Test 6: Custom description format in config
@patch("netbox_zabbix_sync.modules.host_description.datetime")
def test_6_custom_description_format(self, mock_datetime):
"""Test 6: Users who provide a custom description format in the config."""
mock_now = MagicMock()
mock_now.strftime.return_value = "2026-02-25 10:30:00"
mock_datetime.now.return_value = mock_now
config = {"description": "Server {owner} managed at {datetime}"}
desc = Description(self.mock_nb_object, config, "4.5", logger=self.mock_logger)
result = desc.generate()
self.assertEqual(result, "Server admin managed at 2026-02-25 10:30:00")
# Test 7: Owner on lower NetBox version
@patch("netbox_zabbix_sync.modules.host_description.datetime")
def test_7_owner_on_lower_netbox_version(self, mock_datetime):
"""Test 7: Users who try to resolve the owner property on a lower NetBox version (3.2)."""
mock_now = MagicMock()
mock_now.strftime.return_value = "2026-02-25 10:30:00"
mock_datetime.now.return_value = mock_now
config = {"description": "Device owned by {owner}"}
desc = Description(
self.mock_nb_object,
config,
"3.2", # Lower NetBox version
logger=self.mock_logger,
)
result = desc.generate()
# Owner should be empty string on version < 4.5
self.assertEqual(result, "Device owned by ")
# Test 8: Missing or False description returns static
def test_8a_missing_description_returns_static(self):
"""Test 8a: When description option is not found, script should return the static variant."""
config = {} # No description key
desc = Description(self.mock_nb_object, config, "4.5", logger=self.mock_logger)
result = desc.generate()
self.assertEqual(result, "Host added by NetBox sync script.")
def test_8b_false_description_returns_empty(self):
"""Test 8b: When description is set to False, script should return empty string."""
config = {"description": False}
desc = Description(self.mock_nb_object, config, "4.5", logger=self.mock_logger)
result = desc.generate()
self.assertEqual(result, "")
if __name__ == "__main__":
unittest.main()
@@ -0,0 +1,463 @@
"""Tests for the Hostgroup class in the hostgroups module."""
import unittest
from unittest.mock import MagicMock, patch
from netbox_zabbix_sync.modules.exceptions import HostgroupError
from netbox_zabbix_sync.modules.hostgroups import Hostgroup
class TestHostgroups(unittest.TestCase):
"""Test class for Hostgroup functionality."""
def setUp(self):
"""Set up test fixtures."""
# Create mock logger
self.mock_logger = MagicMock()
# *** Mock NetBox Device setup ***
# Create mock device with all properties
self.mock_device = MagicMock()
self.mock_device.name = "test-device"
# Set up site information
site = MagicMock()
site.name = "TestSite"
# Set up region information
region = MagicMock()
region.name = "TestRegion"
# Ensure region string representation returns the name
region.__str__.return_value = "TestRegion"
site.region = region
# Set up site group information
site_group = MagicMock()
site_group.name = "TestSiteGroup"
# Ensure site group string representation returns the name
site_group.__str__.return_value = "TestSiteGroup"
site.group = site_group
self.mock_device.site = site
# Set up role information (varies based on NetBox version)
self.mock_device_role = MagicMock()
self.mock_device_role.name = "TestRole"
# Ensure string representation returns the name
self.mock_device_role.__str__.return_value = "TestRole"
self.mock_device.device_role = self.mock_device_role
self.mock_device.role = self.mock_device_role
# Set up tenant information
tenant = MagicMock()
tenant.name = "TestTenant"
# Ensure tenant string representation returns the name
tenant.__str__.return_value = "TestTenant"
tenant_group = MagicMock()
tenant_group.name = "TestTenantGroup"
# Ensure tenant group string representation returns the name
tenant_group.__str__.return_value = "TestTenantGroup"
tenant.group = tenant_group
self.mock_device.tenant = tenant
# Set up platform information
platform = MagicMock()
platform.name = "TestPlatform"
self.mock_device.platform = platform
# Device-specific properties
device_type = MagicMock()
manufacturer = MagicMock()
manufacturer.name = "TestManufacturer"
device_type.manufacturer = manufacturer
self.mock_device.device_type = device_type
location = MagicMock()
location.name = "TestLocation"
# Ensure location string representation returns the name
location.__str__.return_value = "TestLocation"
self.mock_device.location = location
rack = MagicMock()
rack.name = "TestRack"
self.mock_device.rack = rack
# Custom fields — empty_cf is intentionally None to test the empty CF path
self.mock_device.custom_fields = {"test_cf": "TestCF", "empty_cf": None}
# *** Mock NetBox VM setup ***
# Create mock VM with all properties
self.mock_vm = MagicMock()
self.mock_vm.name = "test-vm"
# Reuse site from device
self.mock_vm.site = site
# Set up role for VM
self.mock_vm.role = self.mock_device_role
# Set up tenant for VM (same as device)
self.mock_vm.tenant = tenant
# Set up platform for VM (same as device)
self.mock_vm.platform = platform
# VM-specific properties
cluster = MagicMock()
cluster.name = "TestCluster"
cluster_type = MagicMock()
cluster_type.name = "TestClusterType"
cluster.type = cluster_type
self.mock_vm.cluster = cluster
# Custom fields
self.mock_vm.custom_fields = {"test_cf": "TestCF"}
# Mock data for nesting tests
self.mock_regions_data = [
{"name": "ParentRegion", "parent": None, "_depth": 0},
{"name": "TestRegion", "parent": "ParentRegion", "_depth": 1},
]
self.mock_groups_data = [
{"name": "ParentSiteGroup", "parent": None, "_depth": 0},
{"name": "TestSiteGroup", "parent": "ParentSiteGroup", "_depth": 1},
]
def test_device_hostgroup_creation(self):
"""Test basic device hostgroup creation."""
hostgroup = Hostgroup("dev", self.mock_device, "4.0", self.mock_logger)
# Test the string representation
self.assertEqual(str(hostgroup), "Hostgroup for dev test-device")
# Check format options were set correctly
self.assertEqual(hostgroup.format_options["site"], "TestSite")
self.assertEqual(hostgroup.format_options["region"], "TestRegion")
self.assertEqual(hostgroup.format_options["site_group"], "TestSiteGroup")
self.assertEqual(hostgroup.format_options["role"], "TestRole")
self.assertEqual(hostgroup.format_options["tenant"], "TestTenant")
self.assertEqual(hostgroup.format_options["tenant_group"], "TestTenantGroup")
self.assertEqual(hostgroup.format_options["platform"], "TestPlatform")
self.assertEqual(hostgroup.format_options["manufacturer"], "TestManufacturer")
self.assertEqual(hostgroup.format_options["location"], "TestLocation")
self.assertEqual(hostgroup.format_options["rack"], "TestRack")
def test_vm_hostgroup_creation(self):
"""Test basic VM hostgroup creation."""
hostgroup = Hostgroup("vm", self.mock_vm, "4.0", self.mock_logger)
# Test the string representation
self.assertEqual(str(hostgroup), "Hostgroup for vm test-vm")
# Check format options were set correctly
self.assertEqual(hostgroup.format_options["site"], "TestSite")
self.assertEqual(hostgroup.format_options["region"], "TestRegion")
self.assertEqual(hostgroup.format_options["site_group"], "TestSiteGroup")
self.assertEqual(hostgroup.format_options["role"], "TestRole")
self.assertEqual(hostgroup.format_options["tenant"], "TestTenant")
self.assertEqual(hostgroup.format_options["tenant_group"], "TestTenantGroup")
self.assertEqual(hostgroup.format_options["platform"], "TestPlatform")
self.assertEqual(hostgroup.format_options["cluster"], "TestCluster")
self.assertEqual(hostgroup.format_options["cluster_type"], "TestClusterType")
def test_invalid_object_type(self):
"""Test that an invalid object type raises an exception."""
with self.assertRaises(HostgroupError):
Hostgroup("invalid", self.mock_device, "4.0", self.mock_logger)
def test_device_hostgroup_formats(self):
"""Test different hostgroup formats for devices."""
hostgroup = Hostgroup("dev", self.mock_device, "4.0", self.mock_logger)
# Custom format: site/region
custom_result = hostgroup.generate("site/region")
self.assertEqual(custom_result, "TestSite/TestRegion")
# Custom format: site/tenant/platform/location
complex_result = hostgroup.generate("site/tenant/platform/location")
self.assertEqual(
complex_result, "TestSite/TestTenant/TestPlatform/TestLocation"
)
def test_vm_hostgroup_formats(self):
"""Test different hostgroup formats for VMs."""
hostgroup = Hostgroup("vm", self.mock_vm, "4.0", self.mock_logger)
# Default format: cluster/role
default_result = hostgroup.generate("cluster/role")
self.assertEqual(default_result, "TestCluster/TestRole")
# Custom format: site/tenant
custom_result = hostgroup.generate("site/tenant")
self.assertEqual(custom_result, "TestSite/TestTenant")
# Custom format: cluster/cluster_type/platform
complex_result = hostgroup.generate("cluster/cluster_type/platform")
self.assertEqual(complex_result, "TestCluster/TestClusterType/TestPlatform")
def test_device_netbox_version_differences(self):
"""Test hostgroup generation with different NetBox versions.
device_role (v2/v3) and role (v4+) are set to different values so the
test can verify that the correct attribute is read for each version.
"""
# Build a device with deliberately different names on each role attribute
versioned_device = MagicMock()
versioned_device.name = "versioned-device"
versioned_device.site = self.mock_device.site
versioned_device.tenant = self.mock_device.tenant
versioned_device.platform = self.mock_device.platform
versioned_device.location = self.mock_device.location
versioned_device.rack = self.mock_device.rack
versioned_device.device_type = self.mock_device.device_type
versioned_device.custom_fields = self.mock_device.custom_fields
old_role = MagicMock()
old_role.name = "OldRole"
new_role = MagicMock()
new_role.name = "NewRole"
versioned_device.device_role = old_role # read by NetBox v2 / v3 code path
versioned_device.role = new_role # read by NetBox v4+ code path
# v2 must use device_role
hostgroup_v2 = Hostgroup("dev", versioned_device, "2.11", self.mock_logger)
self.assertEqual(hostgroup_v2.format_options["role"], "OldRole")
# v3 must also use device_role
hostgroup_v3 = Hostgroup("dev", versioned_device, "3.5", self.mock_logger)
self.assertEqual(hostgroup_v3.format_options["role"], "OldRole")
# v4+ must use role
hostgroup_v4 = Hostgroup("dev", versioned_device, "4.0", self.mock_logger)
self.assertEqual(hostgroup_v4.format_options["role"], "NewRole")
def test_custom_field_lookup(self):
"""Test custom field lookup functionality."""
hostgroup = Hostgroup("dev", self.mock_device, "4.0", self.mock_logger)
# Test custom field exists and is populated
cf_result = hostgroup.custom_field_lookup("test_cf")
self.assertTrue(cf_result["result"])
self.assertEqual(cf_result["cf"], "TestCF")
# Test custom field doesn't exist
cf_result = hostgroup.custom_field_lookup("nonexistent_cf")
self.assertFalse(cf_result["result"])
self.assertIsNone(cf_result["cf"])
# Test custom field exists but has no value (None)
cf_result = hostgroup.custom_field_lookup("empty_cf")
self.assertTrue(cf_result["result"]) # key is present
self.assertIsNone(cf_result["cf"]) # value is empty
def test_hostgroup_with_custom_field(self):
"""Test hostgroup generation including a custom field."""
hostgroup = Hostgroup("dev", self.mock_device, "4.0", self.mock_logger)
# Generate with custom field included
result = hostgroup.generate("site/test_cf/role")
self.assertEqual(result, "TestSite/TestCF/TestRole")
def test_missing_hostgroup_format_item(self):
"""Test handling of missing hostgroup format items."""
# Create a device with minimal attributes
minimal_device = MagicMock()
minimal_device.name = "minimal-device"
minimal_device.site = None
minimal_device.tenant = None
minimal_device.platform = None
minimal_device.custom_fields = {}
# Create role
role = MagicMock()
role.name = "MinimalRole"
minimal_device.role = role
# Create device_type with manufacturer
device_type = MagicMock()
manufacturer = MagicMock()
manufacturer.name = "MinimalManufacturer"
device_type.manufacturer = manufacturer
minimal_device.device_type = device_type
# Create hostgroup
hostgroup = Hostgroup("dev", minimal_device, "4.0", self.mock_logger)
        # Generate with a format that references the missing site
result = hostgroup.generate("site/manufacturer/role")
# Site is missing, so only manufacturer and role should be included
self.assertEqual(result, "MinimalManufacturer/MinimalRole")
# Test with invalid format
with self.assertRaises(HostgroupError):
hostgroup.generate("site/nonexistent/role")
def test_nested_region_hostgroups(self):
"""Test hostgroup generation with nested regions."""
# Mock the build_path function to return a predictable result
with patch(
"netbox_zabbix_sync.modules.hostgroups.build_path"
) as mock_build_path:
# Configure the mock to return a list of regions in the path
mock_build_path.return_value = ["ParentRegion", "TestRegion"]
# Create hostgroup with nested regions enabled
hostgroup = Hostgroup(
"dev",
self.mock_device,
"4.0",
self.mock_logger,
nested_region_flag=True,
nb_regions=self.mock_regions_data,
)
# Generate hostgroup with region
result = hostgroup.generate("site/region/role")
# Should include the parent region
self.assertEqual(result, "TestSite/ParentRegion/TestRegion/TestRole")
def test_nested_sitegroup_hostgroups(self):
"""Test hostgroup generation with nested site groups."""
# Mock the build_path function to return a predictable result
with patch(
"netbox_zabbix_sync.modules.hostgroups.build_path"
) as mock_build_path:
# Configure the mock to return a list of site groups in the path
mock_build_path.return_value = ["ParentSiteGroup", "TestSiteGroup"]
# Create hostgroup with nested site groups enabled
hostgroup = Hostgroup(
"dev",
self.mock_device,
"4.0",
self.mock_logger,
nested_sitegroup_flag=True,
nb_groups=self.mock_groups_data,
)
# Generate hostgroup with site_group
result = hostgroup.generate("site/site_group/role")
# Should include the parent site group
self.assertEqual(result, "TestSite/ParentSiteGroup/TestSiteGroup/TestRole")
def test_vm_list_based_hostgroup_format(self):
"""Test VM hostgroup generation with a list-based format."""
hostgroup = Hostgroup("vm", self.mock_vm, "4.0", self.mock_logger)
# Test with a list of format strings
format_list = ["platform", "role", "cluster_type/cluster"]
# Generate hostgroups for each format in the list
hostgroups = []
for fmt in format_list:
result = hostgroup.generate(fmt)
if result: # Only add non-None results
hostgroups.append(result)
# Verify each expected hostgroup is generated
self.assertEqual(len(hostgroups), 3) # Should have 3 hostgroups
self.assertIn("TestPlatform", hostgroups)
self.assertIn("TestRole", hostgroups)
self.assertIn("TestClusterType/TestCluster", hostgroups)
def test_nested_format_splitting(self):
"""Test that formats with slashes correctly split and resolve each component."""
hostgroup = Hostgroup("vm", self.mock_vm, "4.0", self.mock_logger)
# Test a format with slashes that should be split
complex_format = "cluster_type/cluster"
result = hostgroup.generate(complex_format)
# Verify the format is correctly split and each component resolved
self.assertEqual(result, "TestClusterType/TestCluster")
def test_multiple_hostgroup_formats_device(self):
"""Test device hostgroup generation with multiple formats."""
hostgroup = Hostgroup("dev", self.mock_device, "4.0", self.mock_logger)
# Test with various formats that would be in a list
formats = [
"site",
"manufacturer/role",
"platform/location",
"tenant_group/tenant",
]
# Generate and check each format
results = {}
for fmt in formats:
results[fmt] = hostgroup.generate(fmt)
# Verify results
self.assertEqual(results["site"], "TestSite")
self.assertEqual(results["manufacturer/role"], "TestManufacturer/TestRole")
self.assertEqual(results["platform/location"], "TestPlatform/TestLocation")
self.assertEqual(results["tenant_group/tenant"], "TestTenantGroup/TestTenant")
def test_literal_string_in_format(self):
"""Test that quoted literal strings in a format are used verbatim."""
hostgroup = Hostgroup("dev", self.mock_device, "4.0", self.mock_logger)
# Single-quoted literal
result = hostgroup.generate("'MyDevices'/role")
self.assertEqual(result, "MyDevices/TestRole")
# Double-quoted literal
result = hostgroup.generate('"MyDevices"/role')
self.assertEqual(result, "MyDevices/TestRole")
def test_generate_returns_none_when_all_fields_empty(self):
"""Test that generate() returns None when every format field resolves to no value."""
empty_device = MagicMock()
empty_device.name = "empty-device"
empty_device.site = None
empty_device.tenant = None
empty_device.platform = None
empty_device.role = None
empty_device.location = None
empty_device.rack = None
empty_device.custom_fields = {}
device_type = MagicMock()
manufacturer = MagicMock()
manufacturer.name = "SomeManufacturer"
device_type.manufacturer = manufacturer
empty_device.device_type = device_type
hostgroup = Hostgroup("dev", empty_device, "4.0", self.mock_logger)
# site, tenant and platform all have no value → hg_output stays empty → None
result = hostgroup.generate("site/tenant/platform")
self.assertIsNone(result)
def test_vm_without_cluster(self):
"""Test that cluster/cluster_type are absent from format_options when VM has no cluster."""
clusterless_vm = MagicMock()
clusterless_vm.name = "clusterless-vm"
clusterless_vm.site = self.mock_vm.site
clusterless_vm.tenant = self.mock_vm.tenant
clusterless_vm.platform = self.mock_vm.platform
clusterless_vm.role = self.mock_device_role
clusterless_vm.cluster = None
clusterless_vm.custom_fields = {}
hostgroup = Hostgroup("vm", clusterless_vm, "4.0", self.mock_logger)
# cluster and cluster_type must not appear in format_options
self.assertNotIn("cluster", hostgroup.format_options)
self.assertNotIn("cluster_type", hostgroup.format_options)
# Requesting cluster in a format must raise HostgroupError
with self.assertRaises(HostgroupError):
hostgroup.generate("cluster/role")
def test_empty_custom_field_skipped_in_format(self):
"""Test that an empty (None) custom field is silently omitted from the hostgroup name."""
hostgroup = Hostgroup("dev", self.mock_device, "4.0", self.mock_logger)
# empty_cf has no value → it is skipped; only site and role appear
result = hostgroup.generate("site/empty_cf/role")
self.assertEqual(result, "TestSite/TestRole")
if __name__ == "__main__":
unittest.main()
"""Tests for the ZabbixInterface class in the interface module."""
import unittest
from typing import cast
from netbox_zabbix_sync.modules.exceptions import InterfaceConfigError
from netbox_zabbix_sync.modules.interface import ZabbixInterface
class TestZabbixInterface(unittest.TestCase):
"""Test class for ZabbixInterface functionality."""
def setUp(self):
"""Set up test fixtures."""
self.test_ip = "192.168.1.1"
self.empty_context = {}
self.default_interface = ZabbixInterface(self.empty_context, self.test_ip)
# Create some test contexts for different scenarios
self.snmpv2_context = {
"zabbix": {
"interface_type": 2,
"interface_port": "161",
"snmp": {"version": 2, "community": "public", "bulk": 1},
}
}
self.snmpv3_context = {
"zabbix": {
"interface_type": 2,
"snmp": {
"version": 3,
"securityname": "snmpuser",
"securitylevel": "authPriv",
"authprotocol": "SHA",
"authpassphrase": "authpass123",
"privprotocol": "AES",
"privpassphrase": "privpass123",
"contextname": "context1",
},
}
}
self.agent_context = {
"zabbix": {"interface_type": 1, "interface_port": "10050"}
}
def test_init(self):
"""Test initialization of ZabbixInterface."""
interface = ZabbixInterface(self.empty_context, self.test_ip)
# Check basic properties
self.assertEqual(interface.ip, self.test_ip)
self.assertEqual(interface.context, self.empty_context)
self.assertEqual(interface.interface["ip"], self.test_ip)
self.assertEqual(interface.interface["main"], "1")
self.assertEqual(interface.interface["useip"], "1")
self.assertEqual(interface.interface["dns"], "")
def test_get_context_empty(self):
"""Test get_context with empty context."""
interface = ZabbixInterface(self.empty_context, self.test_ip)
result = interface.get_context()
self.assertFalse(result)
def test_get_context_with_interface_type(self):
"""Test get_context with interface_type but no port."""
context = {"zabbix": {"interface_type": 2}}
interface = ZabbixInterface(context, self.test_ip)
# Should set type and default port
result = interface.get_context()
self.assertTrue(result)
self.assertEqual(interface.interface["type"], 2)
self.assertEqual(interface.interface["port"], "161") # Default port for SNMP
def test_get_context_with_interface_type_and_port(self):
"""Test get_context with both interface_type and port."""
context = {"zabbix": {"interface_type": 1, "interface_port": "12345"}}
interface = ZabbixInterface(context, self.test_ip)
# Should set type and specified port
result = interface.get_context()
self.assertTrue(result)
self.assertEqual(interface.interface["type"], 1)
self.assertEqual(interface.interface["port"], "12345")
def test_set_default_port(self):
"""Test _set_default_port for different interface types."""
interface = ZabbixInterface(self.empty_context, self.test_ip)
# Test for agent type (1)
interface.interface["type"] = 1
interface._set_default_port()
self.assertEqual(interface.interface["port"], "10050")
# Test for SNMP type (2)
interface.interface["type"] = 2
interface._set_default_port()
self.assertEqual(interface.interface["port"], "161")
# Test for IPMI type (3)
interface.interface["type"] = 3
interface._set_default_port()
self.assertEqual(interface.interface["port"], "623")
# Test for JMX type (4)
interface.interface["type"] = 4
interface._set_default_port()
self.assertEqual(interface.interface["port"], "12345")
# Test for unsupported type
interface.interface["type"] = 99
result = interface._set_default_port()
self.assertFalse(result)
def test_set_snmp_v2(self):
"""Test set_snmp with SNMPv2 configuration."""
interface = ZabbixInterface(self.snmpv2_context, self.test_ip)
interface.get_context() # Set the interface type
# Call set_snmp
interface.set_snmp()
# Check SNMP details
details = cast(dict[str, str], interface.interface["details"])
self.assertEqual(details["version"], "2")
self.assertEqual(details["community"], "public")
self.assertEqual(details["bulk"], "1")
def test_set_snmp_v3(self):
"""Test set_snmp with SNMPv3 configuration."""
interface = ZabbixInterface(self.snmpv3_context, self.test_ip)
interface.get_context() # Set the interface type
# Call set_snmp
interface.set_snmp()
# Check SNMP details
details = cast(dict[str, str], interface.interface["details"])
self.assertEqual(details["version"], "3")
self.assertEqual(details["securityname"], "snmpuser")
self.assertEqual(details["securitylevel"], "authPriv")
self.assertEqual(details["authprotocol"], "SHA")
self.assertEqual(details["authpassphrase"], "authpass123")
self.assertEqual(details["privprotocol"], "AES")
self.assertEqual(details["privpassphrase"], "privpass123")
self.assertEqual(details["contextname"], "context1")
def test_set_snmp_no_snmp_config(self):
"""Test set_snmp with missing SNMP configuration."""
# Create context with interface type but no SNMP config
context = {"zabbix": {"interface_type": 2}}
interface = ZabbixInterface(context, self.test_ip)
interface.get_context() # Set the interface type
# Call set_snmp - should raise exception
with self.assertRaises(InterfaceConfigError):
interface.set_snmp()
def test_set_snmp_unsupported_version(self):
"""Test set_snmp with unsupported SNMP version."""
# Create context with invalid SNMP version
context = {
"zabbix": {
"interface_type": 2,
"snmp": {
"version": 4 # Invalid version
},
}
}
interface = ZabbixInterface(context, self.test_ip)
interface.get_context() # Set the interface type
# Call set_snmp - should raise exception
with self.assertRaises(InterfaceConfigError):
interface.set_snmp()
def test_set_snmp_no_version(self):
"""Test set_snmp with missing SNMP version."""
# Create context without SNMP version
context = {
"zabbix": {
"interface_type": 2,
"snmp": {
"community": "public" # No version specified
},
}
}
interface = ZabbixInterface(context, self.test_ip)
interface.get_context() # Set the interface type
# Call set_snmp - should raise exception
with self.assertRaises(InterfaceConfigError):
interface.set_snmp()
def test_set_snmp_non_snmp_interface(self):
"""Test set_snmp with non-SNMP interface type."""
interface = ZabbixInterface(self.agent_context, self.test_ip)
interface.get_context() # Set the interface type
# Call set_snmp - should raise exception
with self.assertRaises(InterfaceConfigError):
interface.set_snmp()
def test_set_default_snmp(self):
"""Test set_default_snmp method."""
interface = ZabbixInterface(self.empty_context, self.test_ip)
interface.set_default_snmp()
# Check interface properties
self.assertEqual(interface.interface["type"], "2")
self.assertEqual(interface.interface["port"], "161")
details = cast(dict[str, str], interface.interface["details"])
self.assertEqual(details["version"], "2")
self.assertEqual(details["community"], "{$SNMP_COMMUNITY}")
self.assertEqual(details["bulk"], "1")
def test_set_default_agent(self):
"""Test set_default_agent method."""
interface = ZabbixInterface(self.empty_context, self.test_ip)
interface.set_default_agent()
# Check interface properties
self.assertEqual(interface.interface["type"], "1")
self.assertEqual(interface.interface["port"], "10050")
def test_snmpv2_no_community(self):
"""Test SNMPv2 with no community string specified."""
# Create context with SNMPv2 but no community
context = {"zabbix": {"interface_type": 2, "snmp": {"version": 2}}}
interface = ZabbixInterface(context, self.test_ip)
interface.get_context() # Set the interface type
# Call set_snmp
interface.set_snmp()
# Should use default community string
details = cast(dict[str, str], interface.interface["details"])
self.assertEqual(details["community"], "{$SNMP_COMMUNITY}")
"""Tests for list-based hostgroup formats in configuration."""
import unittest
from unittest.mock import MagicMock
from netbox_zabbix_sync.modules.exceptions import HostgroupError
from netbox_zabbix_sync.modules.hostgroups import Hostgroup
from netbox_zabbix_sync.modules.tools import verify_hg_format
class TestListHostgroupFormats(unittest.TestCase):
"""Test class for list-based hostgroup format functionality."""
def setUp(self):
"""Set up test fixtures."""
# Create mock logger
self.mock_logger = MagicMock()
# Create mock device
self.mock_device = MagicMock()
self.mock_device.name = "test-device"
# Set up site information
site = MagicMock()
site.name = "TestSite"
# Set up region information
region = MagicMock()
region.name = "TestRegion"
region.__str__.return_value = "TestRegion"
site.region = region
# Set device site
self.mock_device.site = site
# Set up role information
self.mock_device_role = MagicMock()
self.mock_device_role.name = "TestRole"
self.mock_device_role.__str__.return_value = "TestRole"
self.mock_device.role = self.mock_device_role
# Set up rack information
rack = MagicMock()
rack.name = "TestRack"
self.mock_device.rack = rack
# Set up platform information
platform = MagicMock()
platform.name = "TestPlatform"
self.mock_device.platform = platform
# Device-specific properties
device_type = MagicMock()
manufacturer = MagicMock()
manufacturer.name = "TestManufacturer"
device_type.manufacturer = manufacturer
self.mock_device.device_type = device_type
# Create mock VM
self.mock_vm = MagicMock()
self.mock_vm.name = "test-vm"
# Reuse site from device
self.mock_vm.site = site
# Set up role for VM
self.mock_vm.role = self.mock_device_role
# Set up platform for VM
self.mock_vm.platform = platform
# VM-specific properties
cluster = MagicMock()
cluster.name = "TestCluster"
cluster_type = MagicMock()
cluster_type.name = "TestClusterType"
cluster.type = cluster_type
self.mock_vm.cluster = cluster
def test_verify_list_based_hostgroup_format(self):
"""Test verification of list-based hostgroup formats."""
# List format with valid items
valid_format = ["region", "site", "rack"]
# List format with nested path
valid_nested_format = ["region", "site/rack"]
# List format with invalid item
invalid_format = ["region", "invalid_item", "rack"]
# Should not raise exception for valid formats
verify_hg_format(valid_format, hg_type="dev", logger=self.mock_logger)
verify_hg_format(valid_nested_format, hg_type="dev", logger=self.mock_logger)
# Should raise exception for invalid format
with self.assertRaises(HostgroupError):
verify_hg_format(invalid_format, hg_type="dev", logger=self.mock_logger)
def test_simulate_hostgroup_generation_from_config(self):
"""Simulate how the main script would generate hostgroups from list-based config."""
# Mock configuration with list-based hostgroup format
config_format = ["region", "site", "rack"]
hostgroup = Hostgroup("dev", self.mock_device, "4.0", self.mock_logger)
# Simulate the main script's hostgroup generation process
hostgroups = []
for fmt in config_format:
result = hostgroup.generate(fmt)
if result:
hostgroups.append(result)
# Check results
self.assertEqual(len(hostgroups), 3)
self.assertIn("TestRegion", hostgroups)
self.assertIn("TestSite", hostgroups)
self.assertIn("TestRack", hostgroups)
def test_vm_hostgroup_format_from_config(self):
"""Test VM hostgroup generation with list-based format."""
# Mock VM configuration with mixed format
config_format = ["platform", "role", "cluster_type/cluster"]
hostgroup = Hostgroup("vm", self.mock_vm, "4.0", self.mock_logger)
# Simulate the main script's hostgroup generation process
hostgroups = []
for fmt in config_format:
result = hostgroup.generate(fmt)
if result:
hostgroups.append(result)
# Check results
self.assertEqual(len(hostgroups), 3)
self.assertIn("TestPlatform", hostgroups)
self.assertIn("TestRole", hostgroups)
self.assertIn("TestClusterType/TestCluster", hostgroups)
if __name__ == "__main__":
unittest.main()
"""Tests for the PhysicalDevice class in the device module."""
import unittest
from unittest.mock import MagicMock, patch
from netbox_zabbix_sync.modules.device import PhysicalDevice
from netbox_zabbix_sync.modules.exceptions import TemplateError
class TestPhysicalDevice(unittest.TestCase):
"""Test class for PhysicalDevice functionality."""
def setUp(self):
"""Set up test fixtures."""
# Create mock NetBox device
self.mock_nb_device = MagicMock()
self.mock_nb_device.id = 123
self.mock_nb_device.name = "test-device"
self.mock_nb_device.status.label = "Active"
self.mock_nb_device.custom_fields = {"zabbix_hostid": None}
self.mock_nb_device.config_context = {}
# Set up a primary IP
primary_ip = MagicMock()
primary_ip.address = "192.168.1.1/24"
self.mock_nb_device.primary_ip = primary_ip
# Create mock Zabbix API
self.mock_zabbix = MagicMock()
self.mock_zabbix.version = "6.0"
# Mock NetBox journal class
self.mock_nb_journal = MagicMock()
# Create logger mock
self.mock_logger = MagicMock()
# Create PhysicalDevice instance with mocks
self.device = PhysicalDevice(
self.mock_nb_device,
self.mock_zabbix,
self.mock_nb_journal,
"3.0",
journal=True,
logger=self.mock_logger,
config={
"device_cf": "zabbix_hostid",
"template_cf": "zabbix_template",
"templates_config_context": False,
"templates_config_context_overrule": False,
"traverse_regions": False,
"traverse_site_groups": False,
"inventory_mode": "disabled",
"inventory_sync": False,
"device_inventory_map": {},
},
)
def test_init(self):
"""Test the initialization of the PhysicalDevice class."""
# Check that basic properties are set correctly
self.assertEqual(self.device.name, "test-device")
self.assertEqual(self.device.id, 123)
self.assertEqual(self.device.status, "Active")
self.assertEqual(self.device.ip, "192.168.1.1")
self.assertEqual(self.device.cidr, "192.168.1.1/24")
def test_set_basics_with_special_characters(self):
"""Test _setBasics when device name contains special characters."""
# Set name with special characters that
# will actually trigger the special character detection
self.mock_nb_device.name = "test-devïce"
# We need to patch the search function to simulate finding special characters
with patch("netbox_zabbix_sync.modules.device.search") as mock_search:
# Make the search function return True to simulate special characters
mock_search.return_value = True
device = PhysicalDevice(
self.mock_nb_device,
self.mock_zabbix,
self.mock_nb_journal,
"3.0",
logger=self.mock_logger,
config={"device_cf": "zabbix_hostid"},
)
# With the mocked search function, the name should be changed to NETBOX_ID format
self.assertEqual(device.name, f"NETBOX_ID{self.mock_nb_device.id}")
# And visible_name should be set to the original name
self.assertEqual(device.visible_name, "test-devïce")
# use_visible_name flag should be set
self.assertTrue(device.use_visible_name)
def test_get_templates_context(self):
"""Test get_templates_context with valid config."""
# Set up config_context with valid template data
self.mock_nb_device.config_context = {
"zabbix": {"templates": ["Template1", "Template2"]}
}
# Create device with the updated mock
device = PhysicalDevice(
self.mock_nb_device,
self.mock_zabbix,
self.mock_nb_journal,
"3.0",
logger=self.mock_logger,
config={"device_cf": "zabbix_hostid"},
)
# Test that templates are returned correctly
templates = device.get_templates_context()
self.assertEqual(templates, ["Template1", "Template2"])
def test_get_templates_context_with_string(self):
"""Test get_templates_context with a string instead of list."""
# Set up config_context with a string template
self.mock_nb_device.config_context = {"zabbix": {"templates": "Template1"}}
# Create device with the updated mock
device = PhysicalDevice(
self.mock_nb_device,
self.mock_zabbix,
self.mock_nb_journal,
"3.0",
logger=self.mock_logger,
config={"device_cf": "zabbix_hostid"},
)
# Test that template is wrapped in a list
templates = device.get_templates_context()
self.assertEqual(templates, ["Template1"])
def test_get_templates_context_no_zabbix_key(self):
"""Test get_templates_context when zabbix key is missing."""
# Set up config_context without zabbix key
self.mock_nb_device.config_context = {}
# Create device with the updated mock
device = PhysicalDevice(
self.mock_nb_device,
self.mock_zabbix,
self.mock_nb_journal,
"3.0",
logger=self.mock_logger,
config={"device_cf": "zabbix_hostid"},
)
# Test that TemplateError is raised
with self.assertRaises(TemplateError):
device.get_templates_context()
def test_get_templates_context_no_templates_key(self):
"""Test get_templates_context when templates key is missing."""
# Set up config_context without templates key
self.mock_nb_device.config_context = {"zabbix": {}}
# Create device with the updated mock
device = PhysicalDevice(
self.mock_nb_device,
self.mock_zabbix,
self.mock_nb_journal,
"3.0",
logger=self.mock_logger,
config={"device_cf": "zabbix_hostid"},
)
# Test that TemplateError is raised
with self.assertRaises(TemplateError):
device.get_templates_context()
def test_set_template_with_config_context(self):
"""Test set_template with templates_config_context=True."""
# Set up config_context with templates
self.mock_nb_device.config_context = {"zabbix": {"templates": ["Template1"]}}
# Mock get_templates_context to return expected templates
with patch.object(
PhysicalDevice, "get_templates_context", return_value=["Template1"]
):
device = PhysicalDevice(
self.mock_nb_device,
self.mock_zabbix,
self.mock_nb_journal,
"3.0",
logger=self.mock_logger,
config={"device_cf": "zabbix_hostid"},
)
# Call set_template with prefer_config_context=True
result = device.set_template(
prefer_config_context=True, overrule_custom=False
)
# Check result and template names
self.assertTrue(result)
self.assertEqual(device.zbx_template_names, ["Template1"])
def test_set_inventory_disabled_mode(self):
"""Test set_inventory with inventory_mode=disabled."""
# Configure with disabled inventory mode
config_patch = {
"device_cf": "zabbix_hostid",
"inventory_mode": "disabled",
"inventory_sync": False,
}
device = PhysicalDevice(
self.mock_nb_device,
self.mock_zabbix,
self.mock_nb_journal,
"3.0",
logger=self.mock_logger,
config=config_patch,
)
result = device.set_inventory({})
# Check result
self.assertTrue(result)
# Default value for disabled inventory
self.assertEqual(device.inventory_mode, -1)
def test_set_inventory_manual_mode(self):
"""Test set_inventory with inventory_mode=manual."""
# Configure with manual inventory mode
config_patch = {
"device_cf": "zabbix_hostid",
"inventory_mode": "manual",
"inventory_sync": False,
}
device = PhysicalDevice(
self.mock_nb_device,
self.mock_zabbix,
self.mock_nb_journal,
"3.0",
logger=self.mock_logger,
config=config_patch,
)
result = device.set_inventory({})
# Check result
self.assertTrue(result)
self.assertEqual(device.inventory_mode, 0) # Manual mode
def test_set_inventory_automatic_mode(self):
"""Test set_inventory with inventory_mode=automatic."""
# Configure with automatic inventory mode
config_patch = {
"device_cf": "zabbix_hostid",
"inventory_mode": "automatic",
"inventory_sync": False,
}
device = PhysicalDevice(
self.mock_nb_device,
self.mock_zabbix,
self.mock_nb_journal,
"3.0",
logger=self.mock_logger,
config=config_patch,
)
result = device.set_inventory({})
# Check result
self.assertTrue(result)
self.assertEqual(device.inventory_mode, 1) # Automatic mode
def test_set_inventory_with_inventory_sync(self):
"""Test set_inventory with inventory_sync=True."""
# Configure with inventory sync enabled
config_patch = {
"device_cf": "zabbix_hostid",
"inventory_mode": "manual",
"inventory_sync": True,
"device_inventory_map": {"name": "name", "serial": "serialno_a"},
}
device = PhysicalDevice(
self.mock_nb_device,
self.mock_zabbix,
self.mock_nb_journal,
"3.0",
logger=self.mock_logger,
config=config_patch,
)
# Create a mock device with the required attributes
mock_device_data = {"name": "test-device", "serial": "ABC123"}
result = device.set_inventory(mock_device_data)
# Check result
self.assertTrue(result)
self.assertEqual(device.inventory_mode, 0) # Manual mode
self.assertEqual(
device.inventory, {"name": "test-device", "serialno_a": "ABC123"}
)
def test_iscluster_true(self):
"""Test isCluster when device is part of a cluster."""
# Set up virtual_chassis
self.mock_nb_device.virtual_chassis = MagicMock()
# Create device with the updated mock
device = PhysicalDevice(
self.mock_nb_device,
self.mock_zabbix,
self.mock_nb_journal,
"3.0",
logger=self.mock_logger,
config={"device_cf": "zabbix_hostid"},
)
# Check isCluster result
self.assertTrue(device.is_cluster())
def test_is_cluster_false(self):
"""Test isCluster when device is not part of a cluster."""
# Set virtual_chassis to None
self.mock_nb_device.virtual_chassis = None
# Create device with the updated mock
device = PhysicalDevice(
self.mock_nb_device,
self.mock_zabbix,
self.mock_nb_journal,
"3.0",
logger=self.mock_logger,
config={"device_cf": "zabbix_hostid"},
)
# Check isCluster result
self.assertFalse(device.is_cluster())
def test_promote_master_device_primary(self):
"""Test promoteMasterDevice when device is primary in cluster."""
# Set up virtual chassis with master device
mock_vc = MagicMock()
mock_vc.name = "virtual-chassis-1"
mock_master = MagicMock()
mock_master.id = (
self.mock_nb_device.id
) # Set master ID to match the current device
mock_vc.master = mock_master
self.mock_nb_device.virtual_chassis = mock_vc
# Create device with the updated mock
device = PhysicalDevice(
self.mock_nb_device,
self.mock_zabbix,
self.mock_nb_journal,
"3.0",
logger=self.mock_logger,
)
# Call promoteMasterDevice and check the result
result = device.promote_primary_device()
# Should return True for primary device
self.assertTrue(result)
# Device name should be updated to virtual chassis name
self.assertEqual(device.name, "virtual-chassis-1")
def test_promote_master_device_secondary(self):
"""Test promoteMasterDevice when device is secondary in cluster."""
# Set up virtual chassis with a different master device
mock_vc = MagicMock()
mock_vc.name = "virtual-chassis-1"
mock_master = MagicMock()
mock_master.id = (
self.mock_nb_device.id + 1
) # Different ID than the current device
mock_vc.master = mock_master
self.mock_nb_device.virtual_chassis = mock_vc
# Create device with the updated mock
device = PhysicalDevice(
self.mock_nb_device,
self.mock_zabbix,
self.mock_nb_journal,
"3.0",
logger=self.mock_logger,
)
# Call promoteMasterDevice and check the result
result = device.promote_primary_device()
# Should return False for secondary device
self.assertFalse(result)
# Device name should not be modified
self.assertEqual(device.name, "test-device")
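The three inventory-mode tests above pin down a string-to-integer mapping (disabled → -1, manual → 0, automatic → 1, matching Zabbix's numeric `inventory_mode` codes). A minimal sketch of such a mapping — a hypothetical helper for illustration, not the project's actual implementation:

```python
# Hypothetical mapping mirroring the values asserted in the tests above:
# "disabled" -> -1, "manual" -> 0, "automatic" -> 1 (Zabbix numeric codes).
INVENTORY_MODES = {"disabled": -1, "manual": 0, "automatic": 1}

def resolve_inventory_mode(mode: str) -> int:
    """Map a config string to Zabbix's numeric inventory_mode value."""
    try:
        return INVENTORY_MODES[mode.lower()]
    except KeyError as err:
        raise ValueError(f"Unknown inventory_mode: {mode!r}") from err

print(resolve_inventory_mode("manual"))  # 0
```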
@@ -0,0 +1,284 @@
"""Tests for the ZabbixTags class in the tags module."""
import unittest
from unittest.mock import MagicMock
from netbox_zabbix_sync.modules.tags import ZabbixTags
class DummyNBForTags:
"""Minimal NetBox object that supports field_mapper's dict-style access."""
def __init__(self, name="test-host", config_context=None, tags=None, site=None):
self.name = name
self.config_context = config_context or {}
self.tags = tags or []
# Stored as a plain dict so field_mapper can traverse "site/name"
self.site = site if site is not None else {"name": "TestSite"}
def __getitem__(self, key):
return getattr(self, key)
class TestZabbixTagsInit(unittest.TestCase):
"""Tests for ZabbixTags initialisation."""
def test_sync_true_when_tag_sync_enabled(self):
"""sync flag should be True when tag_sync=True."""
nb = DummyNBForTags()
tags = ZabbixTags(nb, tag_map={}, tag_sync=True, logger=MagicMock())
self.assertTrue(tags.sync)
def test_sync_false_when_tag_sync_disabled(self):
"""sync flag should be False when tag_sync=False (default)."""
nb = DummyNBForTags()
tags = ZabbixTags(nb, tag_map={}, logger=MagicMock())
self.assertFalse(tags.sync)
def test_repr_and_str_return_host_name(self):
nb = DummyNBForTags(name="my-host")
tags = ZabbixTags(nb, tag_map={}, host="my-host", logger=MagicMock())
self.assertEqual(repr(tags), "my-host")
self.assertEqual(str(tags), "my-host")
class TestRenderTag(unittest.TestCase):
"""Tests for ZabbixTags.render_tag()."""
def setUp(self):
nb = DummyNBForTags()
self.logger = MagicMock()
self.tags = ZabbixTags(
nb, tag_map={}, tag_sync=True, tag_lower=True, logger=self.logger
)
def test_valid_tag_lowercased(self):
"""Valid name+value with tag_lower=True should produce lowercase keys."""
result = self.tags.render_tag("Site", "Production")
self.assertEqual(result, {"tag": "site", "value": "production"})
def test_valid_tag_not_lowercased(self):
"""tag_lower=False should preserve original case."""
nb = DummyNBForTags()
tags = ZabbixTags(
nb, tag_map={}, tag_sync=True, tag_lower=False, logger=self.logger
)
result = tags.render_tag("Site", "Production")
self.assertEqual(result, {"tag": "Site", "value": "Production"})
def test_invalid_name_none_returns_false(self):
"""None as tag name should return False."""
result = self.tags.render_tag(None, "somevalue")
self.assertFalse(result)
def test_invalid_name_too_long_returns_false(self):
"""Name exceeding 256 characters should return False."""
long_name = "x" * 257
result = self.tags.render_tag(long_name, "somevalue")
self.assertFalse(result)
def test_invalid_value_none_returns_false(self):
"""None as tag value should return False."""
result = self.tags.render_tag("site", None)
self.assertFalse(result)
def test_invalid_value_empty_string_returns_false(self):
"""Empty string as tag value should return False."""
result = self.tags.render_tag("site", "")
self.assertFalse(result)
def test_invalid_value_too_long_returns_false(self):
"""Value exceeding 256 characters should return False."""
long_value = "x" * 257
result = self.tags.render_tag("site", long_value)
self.assertFalse(result)
class TestGenerateFromTagMap(unittest.TestCase):
"""Tests for the field_mapper-driven tag generation path."""
def setUp(self):
self.logger = MagicMock()
def test_generate_tag_from_field_map(self):
"""Tags derived from tag_map fields are lowercased and returned correctly."""
nb = DummyNBForTags(name="router01")
# "site/name" → nb["site"]["name"] → "TestSite", mapped to tag name "site"
tag_map = {"site/name": "site"}
tags = ZabbixTags(
nb,
tag_map=tag_map,
tag_sync=True,
tag_lower=True,
logger=self.logger,
)
result = tags.generate()
self.assertEqual(len(result), 1)
self.assertEqual(result[0]["tag"], "site")
self.assertEqual(result[0]["value"], "testsite")
def test_generate_empty_field_map_produces_no_tags(self):
"""An empty tag_map with no context or NB tags should return an empty list."""
nb = DummyNBForTags()
tags = ZabbixTags(nb, tag_map={}, tag_sync=True, logger=self.logger)
result = tags.generate()
self.assertEqual(result, [])
def test_generate_deduplicates_tags(self):
"""Duplicate tags produced by the map should be deduplicated."""
# Duplicate literal keys collapse into a single dict entry, so the map
# resolves to one tag/value pair; generate() must not emit it twice
nb = DummyNBForTags(name="router01")
tag_map = {"site/name": "site", "site/name": "site"}  # noqa: F601
tags = ZabbixTags(
nb,
tag_map=tag_map,
tag_sync=True,
tag_lower=True,
logger=self.logger,
)
result = tags.generate()
self.assertEqual(len(result), 1)
class TestGenerateFromConfigContext(unittest.TestCase):
"""Tests for the config_context-driven tag generation path."""
def setUp(self):
self.logger = MagicMock()
def test_generates_tags_from_config_context(self):
"""Tags listed in config_context['zabbix']['tags'] are added correctly."""
nb = DummyNBForTags(
config_context={
"zabbix": {
"tags": [
{"environment": "production"},
{"location": "DC1"},
]
}
}
)
tags = ZabbixTags(
nb, tag_map={}, tag_sync=True, tag_lower=True, logger=self.logger
)
result = tags.generate()
self.assertEqual(len(result), 2)
tag_names = [t["tag"] for t in result]
self.assertIn("environment", tag_names)
self.assertIn("location", tag_names)
def test_skips_config_context_tags_with_invalid_values(self):
"""Config context tags with None value should be silently dropped."""
nb = DummyNBForTags(
config_context={
"zabbix": {
"tags": [
{"environment": None}, # invalid value
{"location": "DC1"},
]
}
}
)
tags = ZabbixTags(
nb, tag_map={}, tag_sync=True, tag_lower=True, logger=self.logger
)
result = tags.generate()
self.assertEqual(len(result), 1)
self.assertEqual(result[0]["tag"], "location")
def test_ignores_zabbix_tags_key_missing(self):
"""Missing 'tags' key inside config_context['zabbix'] produces no tags."""
nb = DummyNBForTags(config_context={"zabbix": {"templates": ["T1"]}})
tags = ZabbixTags(nb, tag_map={}, tag_sync=True, logger=self.logger)
result = tags.generate()
self.assertEqual(result, [])
def test_ignores_config_context_tags_not_a_list(self):
"""Non-list value for config_context['zabbix']['tags'] produces no tags."""
nb = DummyNBForTags(config_context={"zabbix": {"tags": "not-a-list"}})
tags = ZabbixTags(nb, tag_map={}, tag_sync=True, logger=self.logger)
result = tags.generate()
self.assertEqual(result, [])
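The config_context path exercised above can be summarized as: read `config_context["zabbix"]["tags"]`, require it to be a list of one-key dicts, and drop entries with invalid values. A hedged sketch inferred from the test fixtures (not the module's actual implementation):

```python
def tags_from_config_context(config_context):
    """Sketch of the config_context tag path the tests above exercise.
    Expects config_context["zabbix"]["tags"] to be a list of one-key dicts;
    a missing key or a non-list value yields no tags. Shape inferred from
    the test fixtures, not taken from the real module."""
    tags = config_context.get("zabbix", {}).get("tags", [])
    if not isinstance(tags, list):
        return []
    result = []
    for entry in tags:
        for name, value in entry.items():
            if name and value:  # silently drop invalid (e.g. None) values
                result.append({"tag": name, "value": value})
    return result
```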
class TestGenerateFromNetboxTags(unittest.TestCase):
"""Tests for the NetBox device tags forwarding path."""
def setUp(self):
self.logger = MagicMock()
# Simulate a list of NetBox tag objects (as dicts, matching real API shape)
self.nb_tags = [
{"name": "ping", "slug": "ping", "display": "ping"},
{"name": "snmp", "slug": "snmp", "display": "snmp"},
]
def test_generates_tags_from_netbox_tags_using_name(self):
"""NetBox device tags are forwarded using tag_name label and tag_value='name'."""
nb = DummyNBForTags(tags=self.nb_tags)
tags = ZabbixTags(
nb,
tag_map={},
tag_sync=True,
tag_lower=True,
tag_name="NetBox",
tag_value="name",
logger=self.logger,
)
result = tags.generate()
self.assertEqual(len(result), 2)
for t in result:
self.assertEqual(t["tag"], "netbox")
values = {t["value"] for t in result}
self.assertIn("ping", values)
self.assertIn("snmp", values)
def test_generates_tags_from_netbox_tags_using_slug(self):
"""tag_value='slug' should use the slug field from each NetBox tag."""
nb = DummyNBForTags(tags=self.nb_tags)
tags = ZabbixTags(
nb,
tag_map={},
tag_sync=True,
tag_lower=False,
tag_name="NetBox",
tag_value="slug",
logger=self.logger,
)
result = tags.generate()
values = {t["value"] for t in result}
self.assertIn("ping", values)
self.assertIn("snmp", values)
def test_generates_tags_from_netbox_tags_default_value_field(self):
"""When tag_value is not a recognised field name, falls back to 'name'."""
nb = DummyNBForTags(tags=self.nb_tags)
tags = ZabbixTags(
nb,
tag_map={},
tag_sync=True,
tag_lower=True,
tag_name="NetBox",
tag_value="invalid_field", # not display/name/slug → fall back to "name"
logger=self.logger,
)
result = tags.generate()
values = {t["value"] for t in result}
self.assertIn("ping", values)
def test_skips_netbox_tags_when_tag_name_not_set(self):
"""NetBox tag forwarding is skipped when tag_name is not configured."""
nb = DummyNBForTags(tags=self.nb_tags)
tags = ZabbixTags(
nb,
tag_map={},
tag_sync=True,
tag_lower=True,
tag_name=None,
logger=self.logger,
)
result = tags.generate()
self.assertEqual(result, [])
if __name__ == "__main__":
unittest.main()
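The NetBox-tag forwarding behavior that the last test class pins down — skip entirely when `tag_name` is unset, fall back to `"name"` when `tag_value` is not one of display/name/slug — can be sketched as follows. The function and parameter names are hypothetical, modeled on the test arguments:

```python
def forward_netbox_tags(nb_tags, tag_name, tag_value="name", tag_lower=True):
    """Sketch of the NetBox device-tag forwarding path tested above.
    Each NetBox tag becomes one Zabbix tag labeled tag_name; skipped
    entirely when tag_name is unset. Hypothetical helper, not the
    module's actual code."""
    if not tag_name:
        return []
    # Unknown tag_value fields fall back to "name", as the tests assert
    field = tag_value if tag_value in ("display", "name", "slug") else "name"
    result = []
    for nb_tag in nb_tags:
        tag, value = tag_name, nb_tag[field]
        if tag_lower:
            tag, value = tag.lower(), value.lower()
        result.append({"tag": tag, "value": value})
    return result
```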
@@ -0,0 +1,67 @@
from netbox_zabbix_sync.modules.tools import sanatize_log_output
def test_sanatize_log_output_secrets():
data = {
"macros": [
{"macro": "{$SECRET}", "type": "1", "value": "supersecret"},
{"macro": "{$PLAIN}", "type": "0", "value": "notsecret"},
]
}
sanitized = sanatize_log_output(data)
assert sanitized["macros"][0]["value"] == "********"
assert sanitized["macros"][1]["value"] == "notsecret"
def test_sanatize_log_output_interface_secrets():
data = {
"interfaceid": 123,
"details": {
"authpassphrase": "supersecret",
"privpassphrase": "anothersecret",
"securityname": "sensitiveuser",
"community": "public",
"other": "normalvalue",
},
}
sanitized = sanatize_log_output(data)
# Sensitive fields should be sanitized, including the SNMP community string
assert sanitized["details"]["authpassphrase"] == "********"
assert sanitized["details"]["privpassphrase"] == "********"
assert sanitized["details"]["securityname"] == "********"
assert sanitized["details"]["community"] == "********"
# Non-sensitive fields should remain
assert sanitized["details"]["other"] == "normalvalue"
# interfaceid should be removed
assert "interfaceid" not in sanitized
def test_sanatize_log_output_interface_macros():
data = {
"interfaceid": 123,
"details": {
"authpassphrase": "{$SECRET_MACRO}",
"privpassphrase": "{$SECRET_MACRO}",
"securityname": "{$USER_MACRO}",
"community": "{$SNMP_COMMUNITY}",
},
}
sanitized = sanatize_log_output(data)
# Macro values should not be sanitized
assert sanitized["details"]["authpassphrase"] == "{$SECRET_MACRO}"
assert sanitized["details"]["privpassphrase"] == "{$SECRET_MACRO}"
assert sanitized["details"]["securityname"] == "{$USER_MACRO}"
assert sanitized["details"]["community"] == "{$SNMP_COMMUNITY}"
assert "interfaceid" not in sanitized
def test_sanatize_log_output_plain_data():
data = {"foo": "bar", "baz": 123}
sanitized = sanatize_log_output(data)
assert sanitized == data
def test_sanatize_log_output_non_dict():
data = [1, 2, 3]
sanitized = sanatize_log_output(data)
assert sanitized == data
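The masking rule these tests imply — real secret values become `********`, but Zabbix user-macro references like `{$SECRET}` pass through unchanged — can be sketched per value. A hedged sketch, assuming Zabbix's macro syntax (upper-case name, optional `:context` suffix); not the actual `sanatize_log_output` internals:

```python
import re

def mask_secret(value):
    """Sketch of the per-value masking rule the tests above imply: literal
    secrets become '********', while Zabbix user-macro references such as
    {$SECRET_MACRO} or {$FOO:bar} are kept so logs stay meaningful."""
    if isinstance(value, str) and re.fullmatch(r"\{\$[A-Z0-9_.]+(:.+)?\}", value):
        return value  # a macro reference, not an actual secret
    return "********"

print(mask_secret("supersecret"))      # ********
print(mask_secret("{$SECRET_MACRO}"))  # {$SECRET_MACRO}
```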
@@ -0,0 +1,189 @@
import unittest
from unittest.mock import MagicMock, patch
from netbox_zabbix_sync.modules.device import PhysicalDevice
from netbox_zabbix_sync.modules.usermacros import ZabbixUsermacros
class DummyNB:
def __init__(self, name="dummy", config_context=None, **kwargs):
self.name = name
self.config_context = config_context or {}
for k, v in kwargs.items():
setattr(self, k, v)
def __getitem__(self, key):
return getattr(self, key)
class TestUsermacroSync(unittest.TestCase):
def setUp(self):
self.nb = DummyNB(serial="1234")
self.logger = MagicMock()
self.usermacro_map = {"serial": "{$HW_SERIAL}"}
def create_mock_device(self, config=None):
"""Helper method to create a properly mocked PhysicalDevice"""
# Mock the NetBox device with all required attributes
mock_nb = MagicMock()
mock_nb.id = 1
mock_nb.name = "dummy"
mock_nb.status.label = "Active"
mock_nb.tenant = None
mock_nb.config_context = {}
mock_nb.primary_ip.address = "192.168.1.1/24"
mock_nb.custom_fields = {"zabbix_hostid": None}
device_config = config if config is not None else {"device_cf": "zabbix_hostid"}
# Create device with proper initialization
device = PhysicalDevice(
nb=mock_nb,
zabbix=MagicMock(),
nb_journal_class=MagicMock(),
nb_version="3.0",
logger=self.logger,
config=device_config,
)
return device
@patch.object(PhysicalDevice, "_usermacro_map")
def test_usermacro_sync_false(self, mock_usermacro_map):
mock_usermacro_map.return_value = self.usermacro_map
device = self.create_mock_device(
config={
"usermacro_sync": False,
"device_cf": "zabbix_hostid",
"tag_sync": False,
}
)
# Call set_usermacros
result = device.set_usermacros()
self.assertEqual(device.usermacros, [])
self.assertTrue(result is True or result is None)
@patch("netbox_zabbix_sync.modules.device.ZabbixUsermacros")
@patch.object(PhysicalDevice, "_usermacro_map")
def test_usermacro_sync_true(self, mock_usermacro_map, mock_usermacros_class):
mock_usermacro_map.return_value = self.usermacro_map
# Mock the ZabbixUsermacros class to return some test data
mock_macros_instance = MagicMock()
mock_macros_instance.sync = True # This is important - sync must be True
mock_macros_instance.generate.return_value = [
{"macro": "{$HW_SERIAL}", "value": "1234"}
]
mock_usermacros_class.return_value = mock_macros_instance
device = self.create_mock_device(
config={
"usermacro_sync": True,
"device_cf": "zabbix_hostid",
"tag_sync": False,
}
)
# Call set_usermacros
device.set_usermacros()
self.assertIsInstance(device.usermacros, list)
self.assertGreater(len(device.usermacros), 0)
@patch("netbox_zabbix_sync.modules.device.ZabbixUsermacros")
@patch.object(PhysicalDevice, "_usermacro_map")
def test_usermacro_sync_full(self, mock_usermacro_map, mock_usermacros_class):
mock_usermacro_map.return_value = self.usermacro_map
# Mock the ZabbixUsermacros class to return some test data
mock_macros_instance = MagicMock()
mock_macros_instance.sync = True # This is important - sync must be True
mock_macros_instance.generate.return_value = [
{"macro": "{$HW_SERIAL}", "value": "1234"}
]
mock_usermacros_class.return_value = mock_macros_instance
device = self.create_mock_device(
config={
"usermacro_sync": "full",
"device_cf": "zabbix_hostid",
"tag_sync": False,
}
)
# Call set_usermacros
device.set_usermacros()
self.assertIsInstance(device.usermacros, list)
self.assertGreater(len(device.usermacros), 0)
class TestZabbixUsermacros(unittest.TestCase):
def setUp(self):
self.nb = DummyNB()
self.logger = MagicMock()
def test_validate_macro_valid(self):
macros = ZabbixUsermacros(self.nb, {}, False, logger=self.logger)
self.assertTrue(macros.validate_macro("{$TEST_MACRO}"))
self.assertTrue(macros.validate_macro("{$A1_2.3}"))
self.assertTrue(macros.validate_macro("{$FOO:bar}"))
def test_validate_macro_invalid(self):
macros = ZabbixUsermacros(self.nb, {}, False, logger=self.logger)
self.assertFalse(macros.validate_macro("$TEST_MACRO"))
self.assertFalse(macros.validate_macro("{TEST_MACRO}"))
self.assertFalse(macros.validate_macro("{$test}")) # lower-case not allowed
self.assertFalse(macros.validate_macro(""))
def test_render_macro_dict(self):
macros = ZabbixUsermacros(self.nb, {}, False, logger=self.logger)
macro = macros.render_macro(
"{$FOO}", {"value": "bar", "type": "secret", "description": "desc"}
)
self.assertEqual(macro["macro"], "{$FOO}")
self.assertEqual(macro["value"], "bar")
self.assertEqual(macro["type"], "1")
self.assertEqual(macro["description"], "desc")
def test_render_macro_dict_missing_value(self):
macros = ZabbixUsermacros(self.nb, {}, False, logger=self.logger)
result = macros.render_macro("{$FOO}", {"type": "text"})
self.assertFalse(result)
self.logger.info.assert_called()
def test_render_macro_str(self):
macros = ZabbixUsermacros(self.nb, {}, False, logger=self.logger)
macro = macros.render_macro("{$FOO}", "bar")
self.assertEqual(macro["macro"], "{$FOO}")
self.assertEqual(macro["value"], "bar")
self.assertEqual(macro["type"], "0")
self.assertEqual(macro["description"], "")
def test_render_macro_invalid_name(self):
macros = ZabbixUsermacros(self.nb, {}, False, logger=self.logger)
result = macros.render_macro("FOO", "bar")
self.assertFalse(result)
self.logger.warning.assert_called()
def test_generate_from_map(self):
nb = DummyNB(memory="bar", role="baz")
usermacro_map = {"memory": "{$FOO}", "role": "{$BAR}"}
macros = ZabbixUsermacros(nb, usermacro_map, True, logger=self.logger)
result = macros.generate()
self.assertEqual(len(result), 2)
self.assertEqual(result[0]["macro"], "{$FOO}")
self.assertEqual(result[1]["macro"], "{$BAR}")
def test_generate_from_config_context(self):
config_context = {"zabbix": {"usermacros": {"{$TEST_MACRO}": "test_value"}}}
nb = DummyNB(config_context=config_context)
macros = ZabbixUsermacros(nb, {}, True, logger=self.logger)
result = macros.generate()
self.assertEqual(len(result), 1)
self.assertEqual(result[0]["macro"], "{$TEST_MACRO}")
self.assertEqual(result[0]["value"], "test_value")
if __name__ == "__main__":
unittest.main()
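The `validate_macro` cases above pin down a name grammar: braced upper-case names like `{$TEST_MACRO}`, digits/underscores/dots allowed, an optional `{$FOO:context}` form, and lower-case or unbraced names rejected. A regex sketch consistent with those cases (an assumption; the real module may validate differently):

```python
import re

# Regex inferred from the validate_macro test cases above: {$NAME} with an
# upper-case name ([A-Z0-9_.]) and an optional ":context" suffix.
MACRO_RE = re.compile(r"\{\$[A-Z0-9_.]+(:.*)?\}")

def validate_macro(name):
    """Return True when name looks like a valid Zabbix user macro."""
    return bool(MACRO_RE.fullmatch(name or ""))

print(validate_macro("{$A1_2.3}"))  # True
print(validate_macro("{$test}"))    # False
```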
@@ -0,0 +1,384 @@
version = 1
revision = 3
requires-python = ">=3.12"
[[package]]
name = "certifi"
version = "2026.1.4"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/e0/2d/a891ca51311197f6ad14a7ef42e2399f36cf2f9bd44752b3dc4eab60fdc5/certifi-2026.1.4.tar.gz", hash = "sha256:ac726dd470482006e014ad384921ed6438c457018f4b3d204aea4281258b2120", size = 154268, upload-time = "2026-01-04T02:42:41.825Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/e6/ad/3cc14f097111b4de0040c83a525973216457bbeeb63739ef1ed275c1c021/certifi-2026.1.4-py3-none-any.whl", hash = "sha256:9943707519e4add1115f44c2bc244f782c0249876bf51b6599fee1ffbedd685c", size = 152900, upload-time = "2026-01-04T02:42:40.15Z" },
]
[[package]]
name = "charset-normalizer"
version = "3.4.4"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/13/69/33ddede1939fdd074bce5434295f38fae7136463422fe4fd3e0e89b98062/charset_normalizer-3.4.4.tar.gz", hash = "sha256:94537985111c35f28720e43603b8e7b43a6ecfb2ce1d3058bbe955b73404e21a", size = 129418, upload-time = "2025-10-14T04:42:32.879Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/f3/85/1637cd4af66fa687396e757dec650f28025f2a2f5a5531a3208dc0ec43f2/charset_normalizer-3.4.4-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:0a98e6759f854bd25a58a73fa88833fba3b7c491169f86ce1180c948ab3fd394", size = 208425, upload-time = "2025-10-14T04:40:53.353Z" },
{ url = "https://files.pythonhosted.org/packages/9d/6a/04130023fef2a0d9c62d0bae2649b69f7b7d8d24ea5536feef50551029df/charset_normalizer-3.4.4-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:b5b290ccc2a263e8d185130284f8501e3e36c5e02750fc6b6bdeb2e9e96f1e25", size = 148162, upload-time = "2025-10-14T04:40:54.558Z" },
{ url = "https://files.pythonhosted.org/packages/78/29/62328d79aa60da22c9e0b9a66539feae06ca0f5a4171ac4f7dc285b83688/charset_normalizer-3.4.4-cp312-cp312-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:74bb723680f9f7a6234dcf67aea57e708ec1fbdf5699fb91dfd6f511b0a320ef", size = 144558, upload-time = "2025-10-14T04:40:55.677Z" },
{ url = "https://files.pythonhosted.org/packages/86/bb/b32194a4bf15b88403537c2e120b817c61cd4ecffa9b6876e941c3ee38fe/charset_normalizer-3.4.4-cp312-cp312-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:f1e34719c6ed0b92f418c7c780480b26b5d9c50349e9a9af7d76bf757530350d", size = 161497, upload-time = "2025-10-14T04:40:57.217Z" },
{ url = "https://files.pythonhosted.org/packages/19/89/a54c82b253d5b9b111dc74aca196ba5ccfcca8242d0fb64146d4d3183ff1/charset_normalizer-3.4.4-cp312-cp312-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:2437418e20515acec67d86e12bf70056a33abdacb5cb1655042f6538d6b085a8", size = 159240, upload-time = "2025-10-14T04:40:58.358Z" },
{ url = "https://files.pythonhosted.org/packages/c0/10/d20b513afe03acc89ec33948320a5544d31f21b05368436d580dec4e234d/charset_normalizer-3.4.4-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:11d694519d7f29d6cd09f6ac70028dba10f92f6cdd059096db198c283794ac86", size = 153471, upload-time = "2025-10-14T04:40:59.468Z" },
{ url = "https://files.pythonhosted.org/packages/61/fa/fbf177b55bdd727010f9c0a3c49eefa1d10f960e5f09d1d887bf93c2e698/charset_normalizer-3.4.4-cp312-cp312-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:ac1c4a689edcc530fc9d9aa11f5774b9e2f33f9a0c6a57864e90908f5208d30a", size = 150864, upload-time = "2025-10-14T04:41:00.623Z" },
{ url = "https://files.pythonhosted.org/packages/05/12/9fbc6a4d39c0198adeebbde20b619790e9236557ca59fc40e0e3cebe6f40/charset_normalizer-3.4.4-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:21d142cc6c0ec30d2efee5068ca36c128a30b0f2c53c1c07bd78cb6bc1d3be5f", size = 150647, upload-time = "2025-10-14T04:41:01.754Z" },
{ url = "https://files.pythonhosted.org/packages/ad/1f/6a9a593d52e3e8c5d2b167daf8c6b968808efb57ef4c210acb907c365bc4/charset_normalizer-3.4.4-cp312-cp312-musllinux_1_2_armv7l.whl", hash = "sha256:5dbe56a36425d26d6cfb40ce79c314a2e4dd6211d51d6d2191c00bed34f354cc", size = 145110, upload-time = "2025-10-14T04:41:03.231Z" },
{ url = "https://files.pythonhosted.org/packages/30/42/9a52c609e72471b0fc54386dc63c3781a387bb4fe61c20231a4ebcd58bdd/charset_normalizer-3.4.4-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:5bfbb1b9acf3334612667b61bd3002196fe2a1eb4dd74d247e0f2a4d50ec9bbf", size = 162839, upload-time = "2025-10-14T04:41:04.715Z" },
{ url = "https://files.pythonhosted.org/packages/c4/5b/c0682bbf9f11597073052628ddd38344a3d673fda35a36773f7d19344b23/charset_normalizer-3.4.4-cp312-cp312-musllinux_1_2_riscv64.whl", hash = "sha256:d055ec1e26e441f6187acf818b73564e6e6282709e9bcb5b63f5b23068356a15", size = 150667, upload-time = "2025-10-14T04:41:05.827Z" },
{ url = "https://files.pythonhosted.org/packages/e4/24/a41afeab6f990cf2daf6cb8c67419b63b48cf518e4f56022230840c9bfb2/charset_normalizer-3.4.4-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:af2d8c67d8e573d6de5bc30cdb27e9b95e49115cd9baad5ddbd1a6207aaa82a9", size = 160535, upload-time = "2025-10-14T04:41:06.938Z" },
{ url = "https://files.pythonhosted.org/packages/2a/e5/6a4ce77ed243c4a50a1fecca6aaaab419628c818a49434be428fe24c9957/charset_normalizer-3.4.4-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:780236ac706e66881f3b7f2f32dfe90507a09e67d1d454c762cf642e6e1586e0", size = 154816, upload-time = "2025-10-14T04:41:08.101Z" },
{ url = "https://files.pythonhosted.org/packages/a8/ef/89297262b8092b312d29cdb2517cb1237e51db8ecef2e9af5edbe7b683b1/charset_normalizer-3.4.4-cp312-cp312-win32.whl", hash = "sha256:5833d2c39d8896e4e19b689ffc198f08ea58116bee26dea51e362ecc7cd3ed26", size = 99694, upload-time = "2025-10-14T04:41:09.23Z" },
{ url = "https://files.pythonhosted.org/packages/3d/2d/1e5ed9dd3b3803994c155cd9aacb60c82c331bad84daf75bcb9c91b3295e/charset_normalizer-3.4.4-cp312-cp312-win_amd64.whl", hash = "sha256:a79cfe37875f822425b89a82333404539ae63dbdddf97f84dcbc3d339aae9525", size = 107131, upload-time = "2025-10-14T04:41:10.467Z" },
{ url = "https://files.pythonhosted.org/packages/d0/d9/0ed4c7098a861482a7b6a95603edce4c0d9db2311af23da1fb2b75ec26fc/charset_normalizer-3.4.4-cp312-cp312-win_arm64.whl", hash = "sha256:376bec83a63b8021bb5c8ea75e21c4ccb86e7e45ca4eb81146091b56599b80c3", size = 100390, upload-time = "2025-10-14T04:41:11.915Z" },
{ url = "https://files.pythonhosted.org/packages/97/45/4b3a1239bbacd321068ea6e7ac28875b03ab8bc0aa0966452db17cd36714/charset_normalizer-3.4.4-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:e1f185f86a6f3403aa2420e815904c67b2f9ebc443f045edd0de921108345794", size = 208091, upload-time = "2025-10-14T04:41:13.346Z" },
{ url = "https://files.pythonhosted.org/packages/7d/62/73a6d7450829655a35bb88a88fca7d736f9882a27eacdca2c6d505b57e2e/charset_normalizer-3.4.4-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:6b39f987ae8ccdf0d2642338faf2abb1862340facc796048b604ef14919e55ed", size = 147936, upload-time = "2025-10-14T04:41:14.461Z" },
{ url = "https://files.pythonhosted.org/packages/89/c5/adb8c8b3d6625bef6d88b251bbb0d95f8205831b987631ab0c8bb5d937c2/charset_normalizer-3.4.4-cp313-cp313-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:3162d5d8ce1bb98dd51af660f2121c55d0fa541b46dff7bb9b9f86ea1d87de72", size = 144180, upload-time = "2025-10-14T04:41:15.588Z" },
{ url = "https://files.pythonhosted.org/packages/91/ed/9706e4070682d1cc219050b6048bfd293ccf67b3d4f5a4f39207453d4b99/charset_normalizer-3.4.4-cp313-cp313-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:81d5eb2a312700f4ecaa977a8235b634ce853200e828fbadf3a9c50bab278328", size = 161346, upload-time = "2025-10-14T04:41:16.738Z" },
{ url = "https://files.pythonhosted.org/packages/d5/0d/031f0d95e4972901a2f6f09ef055751805ff541511dc1252ba3ca1f80cf5/charset_normalizer-3.4.4-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:5bd2293095d766545ec1a8f612559f6b40abc0eb18bb2f5d1171872d34036ede", size = 158874, upload-time = "2025-10-14T04:41:17.923Z" },
{ url = "https://files.pythonhosted.org/packages/f5/83/6ab5883f57c9c801ce5e5677242328aa45592be8a00644310a008d04f922/charset_normalizer-3.4.4-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:a8a8b89589086a25749f471e6a900d3f662d1d3b6e2e59dcecf787b1cc3a1894", size = 153076, upload-time = "2025-10-14T04:41:19.106Z" },
{ url = "https://files.pythonhosted.org/packages/75/1e/5ff781ddf5260e387d6419959ee89ef13878229732732ee73cdae01800f2/charset_normalizer-3.4.4-cp313-cp313-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:bc7637e2f80d8530ee4a78e878bce464f70087ce73cf7c1caf142416923b98f1", size = 150601, upload-time = "2025-10-14T04:41:20.245Z" },
{ url = "https://files.pythonhosted.org/packages/d7/57/71be810965493d3510a6ca79b90c19e48696fb1ff964da319334b12677f0/charset_normalizer-3.4.4-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:f8bf04158c6b607d747e93949aa60618b61312fe647a6369f88ce2ff16043490", size = 150376, upload-time = "2025-10-14T04:41:21.398Z" },
{ url = "https://files.pythonhosted.org/packages/e5/d5/c3d057a78c181d007014feb7e9f2e65905a6c4ef182c0ddf0de2924edd65/charset_normalizer-3.4.4-cp313-cp313-musllinux_1_2_armv7l.whl", hash = "sha256:554af85e960429cf30784dd47447d5125aaa3b99a6f0683589dbd27e2f45da44", size = 144825, upload-time = "2025-10-14T04:41:22.583Z" },
{ url = "https://files.pythonhosted.org/packages/e6/8c/d0406294828d4976f275ffbe66f00266c4b3136b7506941d87c00cab5272/charset_normalizer-3.4.4-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:74018750915ee7ad843a774364e13a3db91682f26142baddf775342c3f5b1133", size = 162583, upload-time = "2025-10-14T04:41:23.754Z" },
{ url = "https://files.pythonhosted.org/packages/d7/24/e2aa1f18c8f15c4c0e932d9287b8609dd30ad56dbe41d926bd846e22fb8d/charset_normalizer-3.4.4-cp313-cp313-musllinux_1_2_riscv64.whl", hash = "sha256:c0463276121fdee9c49b98908b3a89c39be45d86d1dbaa22957e38f6321d4ce3", size = 150366, upload-time = "2025-10-14T04:41:25.27Z" },
{ url = "https://files.pythonhosted.org/packages/e4/5b/1e6160c7739aad1e2df054300cc618b06bf784a7a164b0f238360721ab86/charset_normalizer-3.4.4-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:362d61fd13843997c1c446760ef36f240cf81d3ebf74ac62652aebaf7838561e", size = 160300, upload-time = "2025-10-14T04:41:26.725Z" },
{ url = "https://files.pythonhosted.org/packages/7a/10/f882167cd207fbdd743e55534d5d9620e095089d176d55cb22d5322f2afd/charset_normalizer-3.4.4-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:9a26f18905b8dd5d685d6d07b0cdf98a79f3c7a918906af7cc143ea2e164c8bc", size = 154465, upload-time = "2025-10-14T04:41:28.322Z" },
{ url = "https://files.pythonhosted.org/packages/89/66/c7a9e1b7429be72123441bfdbaf2bc13faab3f90b933f664db506dea5915/charset_normalizer-3.4.4-cp313-cp313-win32.whl", hash = "sha256:9b35f4c90079ff2e2edc5b26c0c77925e5d2d255c42c74fdb70fb49b172726ac", size = 99404, upload-time = "2025-10-14T04:41:29.95Z" },
{ url = "https://files.pythonhosted.org/packages/c4/26/b9924fa27db384bdcd97ab83b4f0a8058d96ad9626ead570674d5e737d90/charset_normalizer-3.4.4-cp313-cp313-win_amd64.whl", hash = "sha256:b435cba5f4f750aa6c0a0d92c541fb79f69a387c91e61f1795227e4ed9cece14", size = 107092, upload-time = "2025-10-14T04:41:31.188Z" },
{ url = "https://files.pythonhosted.org/packages/af/8f/3ed4bfa0c0c72a7ca17f0380cd9e4dd842b09f664e780c13cff1dcf2ef1b/charset_normalizer-3.4.4-cp313-cp313-win_arm64.whl", hash = "sha256:542d2cee80be6f80247095cc36c418f7bddd14f4a6de45af91dfad36d817bba2", size = 100408, upload-time = "2025-10-14T04:41:32.624Z" },
{ url = "https://files.pythonhosted.org/packages/2a/35/7051599bd493e62411d6ede36fd5af83a38f37c4767b92884df7301db25d/charset_normalizer-3.4.4-cp314-cp314-macosx_10_13_universal2.whl", hash = "sha256:da3326d9e65ef63a817ecbcc0df6e94463713b754fe293eaa03da99befb9a5bd", size = 207746, upload-time = "2025-10-14T04:41:33.773Z" },
{ url = "https://files.pythonhosted.org/packages/10/9a/97c8d48ef10d6cd4fcead2415523221624bf58bcf68a802721a6bc807c8f/charset_normalizer-3.4.4-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:8af65f14dc14a79b924524b1e7fffe304517b2bff5a58bf64f30b98bbc5079eb", size = 147889, upload-time = "2025-10-14T04:41:34.897Z" },
{ url = "https://files.pythonhosted.org/packages/10/bf/979224a919a1b606c82bd2c5fa49b5c6d5727aa47b4312bb27b1734f53cd/charset_normalizer-3.4.4-cp314-cp314-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:74664978bb272435107de04e36db5a9735e78232b85b77d45cfb38f758efd33e", size = 143641, upload-time = "2025-10-14T04:41:36.116Z" },
{ url = "https://files.pythonhosted.org/packages/ba/33/0ad65587441fc730dc7bd90e9716b30b4702dc7b617e6ba4997dc8651495/charset_normalizer-3.4.4-cp314-cp314-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:752944c7ffbfdd10c074dc58ec2d5a8a4cd9493b314d367c14d24c17684ddd14", size = 160779, upload-time = "2025-10-14T04:41:37.229Z" },
{ url = "https://files.pythonhosted.org/packages/67/ed/331d6b249259ee71ddea93f6f2f0a56cfebd46938bde6fcc6f7b9a3d0e09/charset_normalizer-3.4.4-cp314-cp314-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:d1f13550535ad8cff21b8d757a3257963e951d96e20ec82ab44bc64aeb62a191", size = 159035, upload-time = "2025-10-14T04:41:38.368Z" },
{ url = "https://files.pythonhosted.org/packages/67/ff/f6b948ca32e4f2a4576aa129d8bed61f2e0543bf9f5f2b7fc3758ed005c9/charset_normalizer-3.4.4-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:ecaae4149d99b1c9e7b88bb03e3221956f68fd6d50be2ef061b2381b61d20838", size = 152542, upload-time = "2025-10-14T04:41:39.862Z" },
{ url = "https://files.pythonhosted.org/packages/16/85/276033dcbcc369eb176594de22728541a925b2632f9716428c851b149e83/charset_normalizer-3.4.4-cp314-cp314-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:cb6254dc36b47a990e59e1068afacdcd02958bdcce30bb50cc1700a8b9d624a6", size = 149524, upload-time = "2025-10-14T04:41:41.319Z" },
{ url = "https://files.pythonhosted.org/packages/9e/f2/6a2a1f722b6aba37050e626530a46a68f74e63683947a8acff92569f979a/charset_normalizer-3.4.4-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:c8ae8a0f02f57a6e61203a31428fa1d677cbe50c93622b4149d5c0f319c1d19e", size = 150395, upload-time = "2025-10-14T04:41:42.539Z" },
{ url = "https://files.pythonhosted.org/packages/60/bb/2186cb2f2bbaea6338cad15ce23a67f9b0672929744381e28b0592676824/charset_normalizer-3.4.4-cp314-cp314-musllinux_1_2_armv7l.whl", hash = "sha256:47cc91b2f4dd2833fddaedd2893006b0106129d4b94fdb6af1f4ce5a9965577c", size = 143680, upload-time = "2025-10-14T04:41:43.661Z" },
{ url = "https://files.pythonhosted.org/packages/7d/a5/bf6f13b772fbb2a90360eb620d52ed8f796f3c5caee8398c3b2eb7b1c60d/charset_normalizer-3.4.4-cp314-cp314-musllinux_1_2_ppc64le.whl", hash = "sha256:82004af6c302b5d3ab2cfc4cc5f29db16123b1a8417f2e25f9066f91d4411090", size = 162045, upload-time = "2025-10-14T04:41:44.821Z" },
{ url = "https://files.pythonhosted.org/packages/df/c5/d1be898bf0dc3ef9030c3825e5d3b83f2c528d207d246cbabe245966808d/charset_normalizer-3.4.4-cp314-cp314-musllinux_1_2_riscv64.whl", hash = "sha256:2b7d8f6c26245217bd2ad053761201e9f9680f8ce52f0fcd8d0755aeae5b2152", size = 149687, upload-time = "2025-10-14T04:41:46.442Z" },
{ url = "https://files.pythonhosted.org/packages/a5/42/90c1f7b9341eef50c8a1cb3f098ac43b0508413f33affd762855f67a410e/charset_normalizer-3.4.4-cp314-cp314-musllinux_1_2_s390x.whl", hash = "sha256:799a7a5e4fb2d5898c60b640fd4981d6a25f1c11790935a44ce38c54e985f828", size = 160014, upload-time = "2025-10-14T04:41:47.631Z" },
{ url = "https://files.pythonhosted.org/packages/76/be/4d3ee471e8145d12795ab655ece37baed0929462a86e72372fd25859047c/charset_normalizer-3.4.4-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:99ae2cffebb06e6c22bdc25801d7b30f503cc87dbd283479e7b606f70aff57ec", size = 154044, upload-time = "2025-10-14T04:41:48.81Z" },
{ url = "https://files.pythonhosted.org/packages/b0/6f/8f7af07237c34a1defe7defc565a9bc1807762f672c0fde711a4b22bf9c0/charset_normalizer-3.4.4-cp314-cp314-win32.whl", hash = "sha256:f9d332f8c2a2fcbffe1378594431458ddbef721c1769d78e2cbc06280d8155f9", size = 99940, upload-time = "2025-10-14T04:41:49.946Z" },
{ url = "https://files.pythonhosted.org/packages/4b/51/8ade005e5ca5b0d80fb4aff72a3775b325bdc3d27408c8113811a7cbe640/charset_normalizer-3.4.4-cp314-cp314-win_amd64.whl", hash = "sha256:8a6562c3700cce886c5be75ade4a5db4214fda19fede41d9792d100288d8f94c", size = 107104, upload-time = "2025-10-14T04:41:51.051Z" },
{ url = "https://files.pythonhosted.org/packages/da/5f/6b8f83a55bb8278772c5ae54a577f3099025f9ade59d0136ac24a0df4bde/charset_normalizer-3.4.4-cp314-cp314-win_arm64.whl", hash = "sha256:de00632ca48df9daf77a2c65a484531649261ec9f25489917f09e455cb09ddb2", size = 100743, upload-time = "2025-10-14T04:41:52.122Z" },
{ url = "https://files.pythonhosted.org/packages/0a/4c/925909008ed5a988ccbb72dcc897407e5d6d3bd72410d69e051fc0c14647/charset_normalizer-3.4.4-py3-none-any.whl", hash = "sha256:7a32c560861a02ff789ad905a2fe94e3f840803362c84fecf1851cb4cf3dc37f", size = 53402, upload-time = "2025-10-14T04:42:31.76Z" },
]

[[package]]
name = "colorama"
version = "0.4.6"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/d8/53/6f443c9a4a8358a93a6792e2acffb9d9d5cb0a5cfd8802644b7b1c9a02e4/colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44", size = 27697, upload-time = "2022-10-25T02:36:22.414Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/d1/d6/3965ed04c63042e047cb6a3e6ed1a63a35087b6a609aa3a15ed8ac56c221/colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6", size = 25335, upload-time = "2022-10-25T02:36:20.889Z" },
]

[[package]]
name = "coverage"
version = "7.13.2"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/ad/49/349848445b0e53660e258acbcc9b0d014895b6739237920886672240f84b/coverage-7.13.2.tar.gz", hash = "sha256:044c6951ec37146b72a50cc81ef02217d27d4c3640efd2640311393cbbf143d3", size = 826523, upload-time = "2026-01-25T13:00:04.889Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/46/39/e92a35f7800222d3f7b2cbb7bbc3b65672ae8d501cb31801b2d2bd7acdf1/coverage-7.13.2-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:f106b2af193f965d0d3234f3f83fc35278c7fb935dfbde56ae2da3dd2c03b84d", size = 219142, upload-time = "2026-01-25T12:58:00.448Z" },
{ url = "https://files.pythonhosted.org/packages/45/7a/8bf9e9309c4c996e65c52a7c5a112707ecdd9fbaf49e10b5a705a402bbb4/coverage-7.13.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:78f45d21dc4d5d6bd29323f0320089ef7eae16e4bef712dff79d184fa7330af3", size = 219503, upload-time = "2026-01-25T12:58:02.451Z" },
{ url = "https://files.pythonhosted.org/packages/87/93/17661e06b7b37580923f3f12406ac91d78aeed293fb6da0b69cc7957582f/coverage-7.13.2-cp312-cp312-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:fae91dfecd816444c74531a9c3d6ded17a504767e97aa674d44f638107265b99", size = 251006, upload-time = "2026-01-25T12:58:04.059Z" },
{ url = "https://files.pythonhosted.org/packages/12/f0/f9e59fb8c310171497f379e25db060abef9fa605e09d63157eebec102676/coverage-7.13.2-cp312-cp312-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:264657171406c114787b441484de620e03d8f7202f113d62fcd3d9688baa3e6f", size = 253750, upload-time = "2026-01-25T12:58:05.574Z" },
{ url = "https://files.pythonhosted.org/packages/e5/b1/1935e31add2232663cf7edd8269548b122a7d100047ff93475dbaaae673e/coverage-7.13.2-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ae47d8dcd3ded0155afbb59c62bd8ab07ea0fd4902e1c40567439e6db9dcaf2f", size = 254862, upload-time = "2026-01-25T12:58:07.647Z" },
{ url = "https://files.pythonhosted.org/packages/af/59/b5e97071ec13df5f45da2b3391b6cdbec78ba20757bc92580a5b3d5fa53c/coverage-7.13.2-cp312-cp312-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:8a0b33e9fd838220b007ce8f299114d406c1e8edb21336af4c97a26ecfd185aa", size = 251420, upload-time = "2026-01-25T12:58:09.309Z" },
{ url = "https://files.pythonhosted.org/packages/3f/75/9495932f87469d013dc515fb0ce1aac5fa97766f38f6b1a1deb1ee7b7f3a/coverage-7.13.2-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:b3becbea7f3ce9a2d4d430f223ec15888e4deb31395840a79e916368d6004cce", size = 252786, upload-time = "2026-01-25T12:58:10.909Z" },
{ url = "https://files.pythonhosted.org/packages/6a/59/af550721f0eb62f46f7b8cb7e6f1860592189267b1c411a4e3a057caacee/coverage-7.13.2-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:f819c727a6e6eeb8711e4ce63d78c620f69630a2e9d53bc95ca5379f57b6ba94", size = 250928, upload-time = "2026-01-25T12:58:12.449Z" },
{ url = "https://files.pythonhosted.org/packages/9b/b1/21b4445709aae500be4ab43bbcfb4e53dc0811c3396dcb11bf9f23fd0226/coverage-7.13.2-cp312-cp312-musllinux_1_2_riscv64.whl", hash = "sha256:4f7b71757a3ab19f7ba286e04c181004c1d61be921795ee8ba6970fd0ec91da5", size = 250496, upload-time = "2026-01-25T12:58:14.047Z" },
{ url = "https://files.pythonhosted.org/packages/ba/b1/0f5d89dfe0392990e4f3980adbde3eb34885bc1effb2dc369e0bf385e389/coverage-7.13.2-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:b7fc50d2afd2e6b4f6f2f403b70103d280a8e0cb35320cbbe6debcda02a1030b", size = 252373, upload-time = "2026-01-25T12:58:15.976Z" },
{ url = "https://files.pythonhosted.org/packages/01/c9/0cf1a6a57a9968cc049a6b896693faa523c638a5314b1fc374eb2b2ac904/coverage-7.13.2-cp312-cp312-win32.whl", hash = "sha256:292250282cf9bcf206b543d7608bda17ca6fc151f4cbae949fc7e115112fbd41", size = 221696, upload-time = "2026-01-25T12:58:17.517Z" },
{ url = "https://files.pythonhosted.org/packages/4d/05/d7540bf983f09d32803911afed135524570f8c47bb394bf6206c1dc3a786/coverage-7.13.2-cp312-cp312-win_amd64.whl", hash = "sha256:eeea10169fac01549a7921d27a3e517194ae254b542102267bef7a93ed38c40e", size = 222504, upload-time = "2026-01-25T12:58:19.115Z" },
{ url = "https://files.pythonhosted.org/packages/15/8b/1a9f037a736ced0a12aacf6330cdaad5008081142a7070bc58b0f7930cbc/coverage-7.13.2-cp312-cp312-win_arm64.whl", hash = "sha256:2a5b567f0b635b592c917f96b9a9cb3dbd4c320d03f4bf94e9084e494f2e8894", size = 221120, upload-time = "2026-01-25T12:58:21.334Z" },
{ url = "https://files.pythonhosted.org/packages/a7/f0/3d3eac7568ab6096ff23791a526b0048a1ff3f49d0e236b2af6fb6558e88/coverage-7.13.2-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:ed75de7d1217cf3b99365d110975f83af0528c849ef5180a12fd91b5064df9d6", size = 219168, upload-time = "2026-01-25T12:58:23.376Z" },
{ url = "https://files.pythonhosted.org/packages/a3/a6/f8b5cfeddbab95fdef4dcd682d82e5dcff7a112ced57a959f89537ee9995/coverage-7.13.2-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:97e596de8fa9bada4d88fde64a3f4d37f1b6131e4faa32bad7808abc79887ddc", size = 219537, upload-time = "2026-01-25T12:58:24.932Z" },
{ url = "https://files.pythonhosted.org/packages/7b/e6/8d8e6e0c516c838229d1e41cadcec91745f4b1031d4db17ce0043a0423b4/coverage-7.13.2-cp313-cp313-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:68c86173562ed4413345410c9480a8d64864ac5e54a5cda236748031e094229f", size = 250528, upload-time = "2026-01-25T12:58:26.567Z" },
{ url = "https://files.pythonhosted.org/packages/8e/78/befa6640f74092b86961f957f26504c8fba3d7da57cc2ab7407391870495/coverage-7.13.2-cp313-cp313-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:7be4d613638d678b2b3773b8f687537b284d7074695a43fe2fbbfc0e31ceaed1", size = 253132, upload-time = "2026-01-25T12:58:28.251Z" },
{ url = "https://files.pythonhosted.org/packages/9d/10/1630db1edd8ce675124a2ee0f7becc603d2bb7b345c2387b4b95c6907094/coverage-7.13.2-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:d7f63ce526a96acd0e16c4af8b50b64334239550402fb1607ce6a584a6d62ce9", size = 254374, upload-time = "2026-01-25T12:58:30.294Z" },
{ url = "https://files.pythonhosted.org/packages/ed/1d/0d9381647b1e8e6d310ac4140be9c428a0277330991e0c35bdd751e338a4/coverage-7.13.2-cp313-cp313-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:406821f37f864f968e29ac14c3fccae0fec9fdeba48327f0341decf4daf92d7c", size = 250762, upload-time = "2026-01-25T12:58:32.036Z" },
{ url = "https://files.pythonhosted.org/packages/43/e4/5636dfc9a7c871ee8776af83ee33b4c26bc508ad6cee1e89b6419a366582/coverage-7.13.2-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:ee68e5a4e3e5443623406b905db447dceddffee0dceb39f4e0cd9ec2a35004b5", size = 252502, upload-time = "2026-01-25T12:58:33.961Z" },
{ url = "https://files.pythonhosted.org/packages/02/2a/7ff2884d79d420cbb2d12fed6fff727b6d0ef27253140d3cdbbd03187ee0/coverage-7.13.2-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:2ee0e58cca0c17dd9c6c1cdde02bb705c7b3fbfa5f3b0b5afeda20d4ebff8ef4", size = 250463, upload-time = "2026-01-25T12:58:35.529Z" },
{ url = "https://files.pythonhosted.org/packages/91/c0/ba51087db645b6c7261570400fc62c89a16278763f36ba618dc8657a187b/coverage-7.13.2-cp313-cp313-musllinux_1_2_riscv64.whl", hash = "sha256:6e5bbb5018bf76a56aabdb64246b5288d5ae1b7d0dd4d0534fe86df2c2992d1c", size = 250288, upload-time = "2026-01-25T12:58:37.226Z" },
{ url = "https://files.pythonhosted.org/packages/03/07/44e6f428551c4d9faf63ebcefe49b30e5c89d1be96f6a3abd86a52da9d15/coverage-7.13.2-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:a55516c68ef3e08e134e818d5e308ffa6b1337cc8b092b69b24287bf07d38e31", size = 252063, upload-time = "2026-01-25T12:58:38.821Z" },
{ url = "https://files.pythonhosted.org/packages/c2/67/35b730ad7e1859dd57e834d1bc06080d22d2f87457d53f692fce3f24a5a9/coverage-7.13.2-cp313-cp313-win32.whl", hash = "sha256:5b20211c47a8abf4abc3319d8ce2464864fa9f30c5fcaf958a3eed92f4f1fef8", size = 221716, upload-time = "2026-01-25T12:58:40.484Z" },
{ url = "https://files.pythonhosted.org/packages/0d/82/e5fcf5a97c72f45fc14829237a6550bf49d0ab882ac90e04b12a69db76b4/coverage-7.13.2-cp313-cp313-win_amd64.whl", hash = "sha256:14f500232e521201cf031549fb1ebdfc0a40f401cf519157f76c397e586c3beb", size = 222522, upload-time = "2026-01-25T12:58:43.247Z" },
{ url = "https://files.pythonhosted.org/packages/b1/f1/25d7b2f946d239dd2d6644ca2cc060d24f97551e2af13b6c24c722ae5f97/coverage-7.13.2-cp313-cp313-win_arm64.whl", hash = "sha256:9779310cb5a9778a60c899f075a8514c89fa6d10131445c2207fc893e0b14557", size = 221145, upload-time = "2026-01-25T12:58:45Z" },
{ url = "https://files.pythonhosted.org/packages/9e/f7/080376c029c8f76fadfe43911d0daffa0cbdc9f9418a0eead70c56fb7f4b/coverage-7.13.2-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:e64fa5a1e41ce5df6b547cbc3d3699381c9e2c2c369c67837e716ed0f549d48e", size = 219861, upload-time = "2026-01-25T12:58:46.586Z" },
{ url = "https://files.pythonhosted.org/packages/42/11/0b5e315af5ab35f4c4a70e64d3314e4eec25eefc6dec13be3a7d5ffe8ac5/coverage-7.13.2-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:b01899e82a04085b6561eb233fd688474f57455e8ad35cd82286463ba06332b7", size = 220207, upload-time = "2026-01-25T12:58:48.277Z" },
{ url = "https://files.pythonhosted.org/packages/b2/0c/0874d0318fb1062117acbef06a09cf8b63f3060c22265adaad24b36306b7/coverage-7.13.2-cp313-cp313t-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:838943bea48be0e2768b0cf7819544cdedc1bbb2f28427eabb6eb8c9eb2285d3", size = 261504, upload-time = "2026-01-25T12:58:49.904Z" },
{ url = "https://files.pythonhosted.org/packages/83/5e/1cd72c22ecb30751e43a72f40ba50fcef1b7e93e3ea823bd9feda8e51f9a/coverage-7.13.2-cp313-cp313t-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:93d1d25ec2b27e90bcfef7012992d1f5121b51161b8bffcda756a816cf13c2c3", size = 263582, upload-time = "2026-01-25T12:58:51.582Z" },
{ url = "https://files.pythonhosted.org/packages/9b/da/8acf356707c7a42df4d0657020308e23e5a07397e81492640c186268497c/coverage-7.13.2-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:93b57142f9621b0d12349c43fc7741fe578e4bc914c1e5a54142856cfc0bf421", size = 266008, upload-time = "2026-01-25T12:58:53.234Z" },
{ url = "https://files.pythonhosted.org/packages/41/41/ea1730af99960309423c6ea8d6a4f1fa5564b2d97bd1d29dda4b42611f04/coverage-7.13.2-cp313-cp313t-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:f06799ae1bdfff7ccb8665d75f8291c69110ba9585253de254688aa8a1ccc6c5", size = 260762, upload-time = "2026-01-25T12:58:55.372Z" },
{ url = "https://files.pythonhosted.org/packages/22/fa/02884d2080ba71db64fdc127b311db60e01fe6ba797d9c8363725e39f4d5/coverage-7.13.2-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:7f9405ab4f81d490811b1d91c7a20361135a2df4c170e7f0b747a794da5b7f23", size = 263571, upload-time = "2026-01-25T12:58:57.52Z" },
{ url = "https://files.pythonhosted.org/packages/d2/6b/4083aaaeba9b3112f55ac57c2ce7001dc4d8fa3fcc228a39f09cc84ede27/coverage-7.13.2-cp313-cp313t-musllinux_1_2_i686.whl", hash = "sha256:f9ab1d5b86f8fbc97a5b3cd6280a3fd85fef3b028689d8a2c00918f0d82c728c", size = 261200, upload-time = "2026-01-25T12:58:59.255Z" },
{ url = "https://files.pythonhosted.org/packages/e9/d2/aea92fa36d61955e8c416ede9cf9bf142aa196f3aea214bb67f85235a050/coverage-7.13.2-cp313-cp313t-musllinux_1_2_riscv64.whl", hash = "sha256:f674f59712d67e841525b99e5e2b595250e39b529c3bda14764e4f625a3fa01f", size = 260095, upload-time = "2026-01-25T12:59:01.066Z" },
{ url = "https://files.pythonhosted.org/packages/0d/ae/04ffe96a80f107ea21b22b2367175c621da920063260a1c22f9452fd7866/coverage-7.13.2-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:c6cadac7b8ace1ba9144feb1ae3cb787a6065ba6d23ffc59a934b16406c26573", size = 262284, upload-time = "2026-01-25T12:59:02.802Z" },
{ url = "https://files.pythonhosted.org/packages/1c/7a/6f354dcd7dfc41297791d6fb4e0d618acb55810bde2c1fd14b3939e05c2b/coverage-7.13.2-cp313-cp313t-win32.whl", hash = "sha256:14ae4146465f8e6e6253eba0cccd57423e598a4cb925958b240c805300918343", size = 222389, upload-time = "2026-01-25T12:59:04.563Z" },
{ url = "https://files.pythonhosted.org/packages/8d/d5/080ad292a4a3d3daf411574be0a1f56d6dee2c4fdf6b005342be9fac807f/coverage-7.13.2-cp313-cp313t-win_amd64.whl", hash = "sha256:9074896edd705a05769e3de0eac0a8388484b503b68863dd06d5e473f874fd47", size = 223450, upload-time = "2026-01-25T12:59:06.677Z" },
{ url = "https://files.pythonhosted.org/packages/88/96/df576fbacc522e9fb8d1c4b7a7fc62eb734be56e2cba1d88d2eabe08ea3f/coverage-7.13.2-cp313-cp313t-win_arm64.whl", hash = "sha256:69e526e14f3f854eda573d3cf40cffd29a1a91c684743d904c33dbdcd0e0f3e7", size = 221707, upload-time = "2026-01-25T12:59:08.363Z" },
{ url = "https://files.pythonhosted.org/packages/55/53/1da9e51a0775634b04fcc11eb25c002fc58ee4f92ce2e8512f94ac5fc5bf/coverage-7.13.2-cp314-cp314-macosx_10_15_x86_64.whl", hash = "sha256:387a825f43d680e7310e6f325b2167dd093bc8ffd933b83e9aa0983cf6e0a2ef", size = 219213, upload-time = "2026-01-25T12:59:11.909Z" },
{ url = "https://files.pythonhosted.org/packages/46/35/b3caac3ebbd10230fea5a33012b27d19e999a17c9285c4228b4b2e35b7da/coverage-7.13.2-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:f0d7fea9d8e5d778cd5a9e8fc38308ad688f02040e883cdc13311ef2748cb40f", size = 219549, upload-time = "2026-01-25T12:59:13.638Z" },
{ url = "https://files.pythonhosted.org/packages/76/9c/e1cf7def1bdc72c1907e60703983a588f9558434a2ff94615747bd73c192/coverage-7.13.2-cp314-cp314-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:e080afb413be106c95c4ee96b4fffdc9e2fa56a8bbf90b5c0918e5c4449412f5", size = 250586, upload-time = "2026-01-25T12:59:15.808Z" },
{ url = "https://files.pythonhosted.org/packages/ba/49/f54ec02ed12be66c8d8897270505759e057b0c68564a65c429ccdd1f139e/coverage-7.13.2-cp314-cp314-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:a7fc042ba3c7ce25b8a9f097eb0f32a5ce1ccdb639d9eec114e26def98e1f8a4", size = 253093, upload-time = "2026-01-25T12:59:17.491Z" },
{ url = "https://files.pythonhosted.org/packages/fb/5e/aaf86be3e181d907e23c0f61fccaeb38de8e6f6b47aed92bf57d8fc9c034/coverage-7.13.2-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:d0ba505e021557f7f8173ee8cd6b926373d8653e5ff7581ae2efce1b11ef4c27", size = 254446, upload-time = "2026-01-25T12:59:19.752Z" },
{ url = "https://files.pythonhosted.org/packages/28/c8/a5fa01460e2d75b0c853b392080d6829d3ca8b5ab31e158fa0501bc7c708/coverage-7.13.2-cp314-cp314-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:7de326f80e3451bd5cc7239ab46c73ddb658fe0b7649476bc7413572d36cd548", size = 250615, upload-time = "2026-01-25T12:59:21.928Z" },
{ url = "https://files.pythonhosted.org/packages/86/0b/6d56315a55f7062bb66410732c24879ccb2ec527ab6630246de5fe45a1df/coverage-7.13.2-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:abaea04f1e7e34841d4a7b343904a3f59481f62f9df39e2cd399d69a187a9660", size = 252452, upload-time = "2026-01-25T12:59:23.592Z" },
{ url = "https://files.pythonhosted.org/packages/30/19/9bc550363ebc6b0ea121977ee44d05ecd1e8bf79018b8444f1028701c563/coverage-7.13.2-cp314-cp314-musllinux_1_2_i686.whl", hash = "sha256:9f93959ee0c604bccd8e0697be21de0887b1f73efcc3aa73a3ec0fd13feace92", size = 250418, upload-time = "2026-01-25T12:59:25.392Z" },
{ url = "https://files.pythonhosted.org/packages/1f/53/580530a31ca2f0cc6f07a8f2ab5460785b02bb11bdf815d4c4d37a4c5169/coverage-7.13.2-cp314-cp314-musllinux_1_2_riscv64.whl", hash = "sha256:13fe81ead04e34e105bf1b3c9f9cdf32ce31736ee5d90a8d2de02b9d3e1bcb82", size = 250231, upload-time = "2026-01-25T12:59:27.888Z" },
{ url = "https://files.pythonhosted.org/packages/e2/42/dd9093f919dc3088cb472893651884bd675e3df3d38a43f9053656dca9a2/coverage-7.13.2-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:d6d16b0f71120e365741bca2cb473ca6fe38930bc5431c5e850ba949f708f892", size = 251888, upload-time = "2026-01-25T12:59:29.636Z" },
{ url = "https://files.pythonhosted.org/packages/fa/a6/0af4053e6e819774626e133c3d6f70fae4d44884bfc4b126cb647baee8d3/coverage-7.13.2-cp314-cp314-win32.whl", hash = "sha256:9b2f4714bb7d99ba3790ee095b3b4ac94767e1347fe424278a0b10acb3ff04fe", size = 221968, upload-time = "2026-01-25T12:59:31.424Z" },
{ url = "https://files.pythonhosted.org/packages/c4/cc/5aff1e1f80d55862442855517bb8ad8ad3a68639441ff6287dde6a58558b/coverage-7.13.2-cp314-cp314-win_amd64.whl", hash = "sha256:e4121a90823a063d717a96e0a0529c727fb31ea889369a0ee3ec00ed99bf6859", size = 222783, upload-time = "2026-01-25T12:59:33.118Z" },
{ url = "https://files.pythonhosted.org/packages/de/20/09abafb24f84b3292cc658728803416c15b79f9ee5e68d25238a895b07d9/coverage-7.13.2-cp314-cp314-win_arm64.whl", hash = "sha256:6873f0271b4a15a33e7590f338d823f6f66f91ed147a03938d7ce26efd04eee6", size = 221348, upload-time = "2026-01-25T12:59:34.939Z" },
{ url = "https://files.pythonhosted.org/packages/b6/60/a3820c7232db63be060e4019017cd3426751c2699dab3c62819cdbcea387/coverage-7.13.2-cp314-cp314t-macosx_10_15_x86_64.whl", hash = "sha256:f61d349f5b7cd95c34017f1927ee379bfbe9884300d74e07cf630ccf7a610c1b", size = 219950, upload-time = "2026-01-25T12:59:36.624Z" },
{ url = "https://files.pythonhosted.org/packages/fd/37/e4ef5975fdeb86b1e56db9a82f41b032e3d93a840ebaf4064f39e770d5c5/coverage-7.13.2-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:a43d34ce714f4ca674c0d90beb760eb05aad906f2c47580ccee9da8fe8bfb417", size = 220209, upload-time = "2026-01-25T12:59:38.339Z" },
{ url = "https://files.pythonhosted.org/packages/54/df/d40e091d00c51adca1e251d3b60a8b464112efa3004949e96a74d7c19a64/coverage-7.13.2-cp314-cp314t-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:bff1b04cb9d4900ce5c56c4942f047dc7efe57e2608cb7c3c8936e9970ccdbee", size = 261576, upload-time = "2026-01-25T12:59:40.446Z" },
{ url = "https://files.pythonhosted.org/packages/c5/44/5259c4bed54e3392e5c176121af9f71919d96dde853386e7730e705f3520/coverage-7.13.2-cp314-cp314t-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:6ae99e4560963ad8e163e819e5d77d413d331fd00566c1e0856aa252303552c1", size = 263704, upload-time = "2026-01-25T12:59:42.346Z" },
{ url = "https://files.pythonhosted.org/packages/16/bd/ae9f005827abcbe2c70157459ae86053971c9fa14617b63903abbdce26d9/coverage-7.13.2-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:e79a8c7d461820257d9aa43716c4efc55366d7b292e46b5b37165be1d377405d", size = 266109, upload-time = "2026-01-25T12:59:44.073Z" },
{ url = "https://files.pythonhosted.org/packages/a2/c0/8e279c1c0f5b1eaa3ad9b0fb7a5637fc0379ea7d85a781c0fe0bb3cfc2ab/coverage-7.13.2-cp314-cp314t-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:060ee84f6a769d40c492711911a76811b4befb6fba50abb450371abb720f5bd6", size = 260686, upload-time = "2026-01-25T12:59:45.804Z" },
{ url = "https://files.pythonhosted.org/packages/b2/47/3a8112627e9d863e7cddd72894171c929e94491a597811725befdcd76bce/coverage-7.13.2-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:3bca209d001fd03ea2d978f8a4985093240a355c93078aee3f799852c23f561a", size = 263568, upload-time = "2026-01-25T12:59:47.929Z" },
{ url = "https://files.pythonhosted.org/packages/92/bc/7ea367d84afa3120afc3ce6de294fd2dcd33b51e2e7fbe4bbfd200f2cb8c/coverage-7.13.2-cp314-cp314t-musllinux_1_2_i686.whl", hash = "sha256:6b8092aa38d72f091db61ef83cb66076f18f02da3e1a75039a4f218629600e04", size = 261174, upload-time = "2026-01-25T12:59:49.717Z" },
{ url = "https://files.pythonhosted.org/packages/33/b7/f1092dcecb6637e31cc2db099581ee5c61a17647849bae6b8261a2b78430/coverage-7.13.2-cp314-cp314t-musllinux_1_2_riscv64.whl", hash = "sha256:4a3158dc2dcce5200d91ec28cd315c999eebff355437d2765840555d765a6e5f", size = 260017, upload-time = "2026-01-25T12:59:51.463Z" },
{ url = "https://files.pythonhosted.org/packages/2b/cd/f3d07d4b95fbe1a2ef0958c15da614f7e4f557720132de34d2dc3aa7e911/coverage-7.13.2-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:3973f353b2d70bd9796cc12f532a05945232ccae966456c8ed7034cb96bbfd6f", size = 262337, upload-time = "2026-01-25T12:59:53.407Z" },
{ url = "https://files.pythonhosted.org/packages/e0/db/b0d5b2873a07cb1e06a55d998697c0a5a540dcefbf353774c99eb3874513/coverage-7.13.2-cp314-cp314t-win32.whl", hash = "sha256:79f6506a678a59d4ded048dc72f1859ebede8ec2b9a2d509ebe161f01c2879d3", size = 222749, upload-time = "2026-01-25T12:59:56.316Z" },
{ url = "https://files.pythonhosted.org/packages/e5/2f/838a5394c082ac57d85f57f6aba53093b30d9089781df72412126505716f/coverage-7.13.2-cp314-cp314t-win_amd64.whl", hash = "sha256:196bfeabdccc5a020a57d5a368c681e3a6ceb0447d153aeccc1ab4d70a5032ba", size = 223857, upload-time = "2026-01-25T12:59:58.201Z" },
{ url = "https://files.pythonhosted.org/packages/44/d4/b608243e76ead3a4298824b50922b89ef793e50069ce30316a65c1b4d7ef/coverage-7.13.2-cp314-cp314t-win_arm64.whl", hash = "sha256:69269ab58783e090bfbf5b916ab3d188126e22d6070bbfc93098fdd474ef937c", size = 221881, upload-time = "2026-01-25T13:00:00.449Z" },
{ url = "https://files.pythonhosted.org/packages/d2/db/d291e30fdf7ea617a335531e72294e0c723356d7fdde8fba00610a76bda9/coverage-7.13.2-py3-none-any.whl", hash = "sha256:40ce1ea1e25125556d8e76bd0b61500839a07944cc287ac21d5626f3e620cad5", size = 210943, upload-time = "2026-01-25T13:00:02.388Z" },
]

[[package]]
name = "idna"
version = "3.11"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/6f/6d/0703ccc57f3a7233505399edb88de3cbd678da106337b9fcde432b65ed60/idna-3.11.tar.gz", hash = "sha256:795dafcc9c04ed0c1fb032c2aa73654d8e8c5023a7df64a53f39190ada629902", size = 194582, upload-time = "2025-10-12T14:55:20.501Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/0e/61/66938bbb5fc52dbdf84594873d5b51fb1f7c7794e9c0f5bd885f30bc507b/idna-3.11-py3-none-any.whl", hash = "sha256:771a87f49d9defaf64091e6e6fe9c18d4833f140bd19464795bc32d966ca37ea", size = 71008, upload-time = "2025-10-12T14:55:18.883Z" },
]

[[package]]
name = "igraph"
version = "1.0.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "texttable" },
]
sdist = { url = "https://files.pythonhosted.org/packages/23/be/56bef1919005b4caf1f71522b300d359f7faeb7ae93a3b0baa9b4f146a87/igraph-1.0.0.tar.gz", hash = "sha256:2414d0be2e4d77ee5357807d100974b40f6082bb1bb71988ec46cfb6728651ee", size = 5077105, upload-time = "2025-10-23T12:22:50.127Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/a5/03/3278ad0ceb3ea0e84d8ae3a85bdded4d0e57853aeb802a200feb43847b93/igraph-1.0.0-cp39-abi3-macosx_10_15_x86_64.whl", hash = "sha256:c2cbc415e02523e5a241eecee82319080bf928a70b1ba299f3b3e25bf029b6d4", size = 2257415, upload-time = "2025-10-23T12:22:27.246Z" },
{ url = "https://files.pythonhosted.org/packages/0d/bc/6281ec7f9baaf71ee57c3b1748da2d3148d15d253e1a03006f204aa68ca5/igraph-1.0.0-cp39-abi3-macosx_11_0_arm64.whl", hash = "sha256:1a27753cd80680a8f676c2d5a467aaa4a95e510b30748398ec4e4aeb982130e8", size = 2048555, upload-time = "2025-10-23T12:22:29.49Z" },
{ url = "https://files.pythonhosted.org/packages/2a/38/3cd6428a4ed4c09a56df05998438e7774fd1d799ee4fb8fc481674f5f7fc/igraph-1.0.0-cp39-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:a55dc3a2a4e3fc3eba42479910c1511bfc3ecb33cdf5f0406891fd85f14b5aee", size = 5314141, upload-time = "2025-10-23T12:22:31.023Z" },
{ url = "https://files.pythonhosted.org/packages/7d/da/dd2867c25adbb41563720f14b5fc895c98bf88be682a3faff4f7b3118d2a/igraph-1.0.0-cp39-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:2d04c2c76f686fb1f554ee35dfd3085f5e73b7965ba6b4cf06d53e66b1955522", size = 5683134, upload-time = "2025-10-23T12:22:32.423Z" },
{ url = "https://files.pythonhosted.org/packages/e5/40/243c118d34ab80382d7009c4dcb99b887384c3d2ce84d29eeac19e2a007a/igraph-1.0.0-cp39-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:f2b52dc1757fff0fed29a9f7a276d971a11db4211569ed78b9eab36288dfcc9d", size = 6211583, upload-time = "2025-10-23T12:22:34.238Z" },
{ url = "https://files.pythonhosted.org/packages/1d/b7/88f433819c54b496cb0315fce28e658970cb20ff5dbd52a5a605ce2888de/igraph-1.0.0-cp39-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:05c79a2a8fca695b2f217a6fa7f2549f896f757d4db41be32a055400cb19cc30", size = 6594509, upload-time = "2025-10-23T12:22:35.831Z" },
{ url = "https://files.pythonhosted.org/packages/7b/5d/8f7f6f619d374e959aa3664ebc4b24c10abc90c2e8efbed97f2623fadaf5/igraph-1.0.0-cp39-abi3-win32.whl", hash = "sha256:c2bce3cd472fec3dd9c4d8a3ea5b6b9be65fb30edf760beb4850760dd4f2d479", size = 2725406, upload-time = "2025-10-23T12:22:37.588Z" },
{ url = "https://files.pythonhosted.org/packages/af/77/a85b3745cf40a0572bae2de8cd9c2a2a8af78e5cf3e880fc0a249114e609/igraph-1.0.0-cp39-abi3-win_amd64.whl", hash = "sha256:faeff8ede0cf15eb4ded44b0fcea6e1886740146e60504c24ad2da14e0939563", size = 3221663, upload-time = "2025-10-23T12:22:39.404Z" },
{ url = "https://files.pythonhosted.org/packages/ef/7e/5df541c37bdf6493035e89c22bd53f30d99b291bcda6c78e9a8afeecec2b/igraph-1.0.0-cp39-abi3-win_arm64.whl", hash = "sha256:b607cafc24b10a615e713ee96e58208ef27e0764af80140c7cc45d4724a3f2df", size = 2785701, upload-time = "2025-10-23T12:22:41.03Z" },
]

[[package]]
name = "iniconfig"
version = "2.3.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/72/34/14ca021ce8e5dfedc35312d08ba8bf51fdd999c576889fc2c24cb97f4f10/iniconfig-2.3.0.tar.gz", hash = "sha256:c76315c77db068650d49c5b56314774a7804df16fee4402c1f19d6d15d8c4730", size = 20503, upload-time = "2025-10-18T21:55:43.219Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/cb/b1/3846dd7f199d53cb17f49cba7e651e9ce294d8497c8c150530ed11865bb8/iniconfig-2.3.0-py3-none-any.whl", hash = "sha256:f631c04d2c48c52b84d0d0549c99ff3859c98df65b3101406327ecc7d53fbf12", size = 7484, upload-time = "2025-10-18T21:55:41.639Z" },
]

[[package]]
name = "netbox-zabbix-sync"
source = { editable = "." }
dependencies = [
{ name = "igraph" },
{ name = "pynetbox" },
{ name = "zabbix-utils" },
]

[package.dev-dependencies]
dev = [
{ name = "pytest" },
{ name = "pytest-cov" },
{ name = "ruff" },
{ name = "ty" },
]

[package.metadata]
requires-dist = [
{ name = "igraph", specifier = ">=1.0.0" },
{ name = "pynetbox", specifier = ">=7.6.1" },
{ name = "zabbix-utils", specifier = ">=2.0.4" },
]

[package.metadata.requires-dev]
dev = [
{ name = "pytest", specifier = ">=9.0.2" },
{ name = "pytest-cov", specifier = ">=7.0.0" },
{ name = "ruff", specifier = ">=0.14.14" },
{ name = "ty", specifier = ">=0.0.14" },
]

[[package]]
name = "packaging"
version = "26.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/65/ee/299d360cdc32edc7d2cf530f3accf79c4fca01e96ffc950d8a52213bd8e4/packaging-26.0.tar.gz", hash = "sha256:00243ae351a257117b6a241061796684b084ed1c516a08c48a3f7e147a9d80b4", size = 143416, upload-time = "2026-01-21T20:50:39.064Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/b7/b9/c538f279a4e237a006a2c98387d081e9eb060d203d8ed34467cc0f0b9b53/packaging-26.0-py3-none-any.whl", hash = "sha256:b36f1fef9334a5588b4166f8bcd26a14e521f2b55e6b9de3aaa80d3ff7a37529", size = 74366, upload-time = "2026-01-21T20:50:37.788Z" },
]

[[package]]
name = "pluggy"
version = "1.6.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/f9/e2/3e91f31a7d2b083fe6ef3fa267035b518369d9511ffab804f839851d2779/pluggy-1.6.0.tar.gz", hash = "sha256:7dcc130b76258d33b90f61b658791dede3486c3e6bfb003ee5c9bfb396dd22f3", size = 69412, upload-time = "2025-05-15T12:30:07.975Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/54/20/4d324d65cc6d9205fabedc306948156824eb9f0ee1633355a8f7ec5c66bf/pluggy-1.6.0-py3-none-any.whl", hash = "sha256:e920276dd6813095e9377c0bc5566d94c932c33b27a3e3945d8389c374dd4746", size = 20538, upload-time = "2025-05-15T12:30:06.134Z" },
]

[[package]]
name = "pygments"
version = "2.19.2"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/b0/77/a5b8c569bf593b0140bde72ea885a803b82086995367bf2037de0159d924/pygments-2.19.2.tar.gz", hash = "sha256:636cb2477cec7f8952536970bc533bc43743542f70392ae026374600add5b887", size = 4968631, upload-time = "2025-06-21T13:39:12.283Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/c7/21/705964c7812476f378728bdf590ca4b771ec72385c533964653c68e86bdc/pygments-2.19.2-py3-none-any.whl", hash = "sha256:86540386c03d588bb81d44bc3928634ff26449851e99741617ecb9037ee5ec0b", size = 1225217, upload-time = "2025-06-21T13:39:07.939Z" },
]

[[package]]
name = "pynetbox"
version = "7.6.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "packaging" },
{ name = "requests" },
]
sdist = { url = "https://files.pythonhosted.org/packages/11/0a/f0b733d44c4793ee3be0ee142a8ac92cfdd6232f64e4ae2dda256a08fb41/pynetbox-7.6.1.tar.gz", hash = "sha256:8a7ee99b89d08848be134793015afc17c85711a18e8c7e67c353362e1c8d7fc7", size = 92489, upload-time = "2026-01-28T16:50:50.223Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/c0/f6/a11612421017fdd8f93e653ea1505d4e64e9a24de0974c53a36b154cd945/pynetbox-7.6.1-py3-none-any.whl", hash = "sha256:daa064b1cc4e7d871124ddca1e0de3a36e7ff9e0814fb046a90e36024fd59e4b", size = 39319, upload-time = "2026-01-28T16:50:49.234Z" },
]

[[package]]
name = "pytest"
version = "9.0.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "colorama", marker = "sys_platform == 'win32'" },
{ name = "iniconfig" },
{ name = "packaging" },
{ name = "pluggy" },
{ name = "pygments" },
]
sdist = { url = "https://files.pythonhosted.org/packages/d1/db/7ef3487e0fb0049ddb5ce41d3a49c235bf9ad299b6a25d5780a89f19230f/pytest-9.0.2.tar.gz", hash = "sha256:75186651a92bd89611d1d9fc20f0b4345fd827c41ccd5c299a868a05d70edf11", size = 1568901, upload-time = "2025-12-06T21:30:51.014Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/3b/ab/b3226f0bd7cdcf710fbede2b3548584366da3b19b5021e74f5bde2a8fa3f/pytest-9.0.2-py3-none-any.whl", hash = "sha256:711ffd45bf766d5264d487b917733b453d917afd2b0ad65223959f59089f875b", size = 374801, upload-time = "2025-12-06T21:30:49.154Z" },
]

[[package]]
name = "pytest-cov"
version = "7.0.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "coverage" },
{ name = "pluggy" },
{ name = "pytest" },
]
sdist = { url = "https://files.pythonhosted.org/packages/5e/f7/c933acc76f5208b3b00089573cf6a2bc26dc80a8aece8f52bb7d6b1855ca/pytest_cov-7.0.0.tar.gz", hash = "sha256:33c97eda2e049a0c5298e91f519302a1334c26ac65c1a483d6206fd458361af1", size = 54328, upload-time = "2025-09-09T10:57:02.113Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/ee/49/1377b49de7d0c1ce41292161ea0f721913fa8722c19fb9c1e3aa0367eecb/pytest_cov-7.0.0-py3-none-any.whl", hash = "sha256:3b8e9558b16cc1479da72058bdecf8073661c7f57f7d3c5f22a1c23507f2d861", size = 22424, upload-time = "2025-09-09T10:57:00.695Z" },
]

[[package]]
name = "requests"
version = "2.32.5"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "certifi" },
{ name = "charset-normalizer" },
{ name = "idna" },
{ name = "urllib3" },
]
sdist = { url = "https://files.pythonhosted.org/packages/c9/74/b3ff8e6c8446842c3f5c837e9c3dfcfe2018ea6ecef224c710c85ef728f4/requests-2.32.5.tar.gz", hash = "sha256:dbba0bac56e100853db0ea71b82b4dfd5fe2bf6d3754a8893c3af500cec7d7cf", size = 134517, upload-time = "2025-08-18T20:46:02.573Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/1e/db/4254e3eabe8020b458f1a747140d32277ec7a271daf1d235b70dc0b4e6e3/requests-2.32.5-py3-none-any.whl", hash = "sha256:2462f94637a34fd532264295e186976db0f5d453d1cdd31473c85a6a161affb6", size = 64738, upload-time = "2025-08-18T20:46:00.542Z" },
]

[[package]]
name = "ruff"
version = "0.14.14"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/2e/06/f71e3a86b2df0dfa2d2f72195941cd09b44f87711cb7fa5193732cb9a5fc/ruff-0.14.14.tar.gz", hash = "sha256:2d0f819c9a90205f3a867dbbd0be083bee9912e170fd7d9704cc8ae45824896b", size = 4515732, upload-time = "2026-01-22T22:30:17.527Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/d2/89/20a12e97bc6b9f9f68343952da08a8099c57237aef953a56b82711d55edd/ruff-0.14.14-py3-none-linux_armv6l.whl", hash = "sha256:7cfe36b56e8489dee8fbc777c61959f60ec0f1f11817e8f2415f429552846aed", size = 10467650, upload-time = "2026-01-22T22:30:08.578Z" },
{ url = "https://files.pythonhosted.org/packages/a3/b1/c5de3fd2d5a831fcae21beda5e3589c0ba67eec8202e992388e4b17a6040/ruff-0.14.14-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:6006a0082336e7920b9573ef8a7f52eec837add1265cc74e04ea8a4368cd704c", size = 10883245, upload-time = "2026-01-22T22:30:04.155Z" },
{ url = "https://files.pythonhosted.org/packages/b8/7c/3c1db59a10e7490f8f6f8559d1db8636cbb13dccebf18686f4e3c9d7c772/ruff-0.14.14-py3-none-macosx_11_0_arm64.whl", hash = "sha256:026c1d25996818f0bf498636686199d9bd0d9d6341c9c2c3b62e2a0198b758de", size = 10231273, upload-time = "2026-01-22T22:30:34.642Z" },
{ url = "https://files.pythonhosted.org/packages/a1/6e/5e0e0d9674be0f8581d1f5e0f0a04761203affce3232c1a1189d0e3b4dad/ruff-0.14.14-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f666445819d31210b71e0a6d1c01e24447a20b85458eea25a25fe8142210ae0e", size = 10585753, upload-time = "2026-01-22T22:30:31.781Z" },
{ url = "https://files.pythonhosted.org/packages/23/09/754ab09f46ff1884d422dc26d59ba18b4e5d355be147721bb2518aa2a014/ruff-0.14.14-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:3c0f18b922c6d2ff9a5e6c3ee16259adc513ca775bcf82c67ebab7cbd9da5bc8", size = 10286052, upload-time = "2026-01-22T22:30:24.827Z" },
{ url = "https://files.pythonhosted.org/packages/c8/cc/e71f88dd2a12afb5f50733851729d6b571a7c3a35bfdb16c3035132675a0/ruff-0.14.14-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1629e67489c2dea43e8658c3dba659edbfd87361624b4040d1df04c9740ae906", size = 11043637, upload-time = "2026-01-22T22:30:13.239Z" },
{ url = "https://files.pythonhosted.org/packages/67/b2/397245026352494497dac935d7f00f1468c03a23a0c5db6ad8fc49ca3fb2/ruff-0.14.14-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:27493a2131ea0f899057d49d303e4292b2cae2bb57253c1ed1f256fbcd1da480", size = 12194761, upload-time = "2026-01-22T22:30:22.542Z" },
{ url = "https://files.pythonhosted.org/packages/5b/06/06ef271459f778323112c51b7587ce85230785cd64e91772034ddb88f200/ruff-0.14.14-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:01ff589aab3f5b539e35db38425da31a57521efd1e4ad1ae08fc34dbe30bd7df", size = 12005701, upload-time = "2026-01-22T22:30:20.499Z" },
{ url = "https://files.pythonhosted.org/packages/41/d6/99364514541cf811ccc5ac44362f88df66373e9fec1b9d1c4cc830593fe7/ruff-0.14.14-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:1cc12d74eef0f29f51775f5b755913eb523546b88e2d733e1d701fe65144e89b", size = 11282455, upload-time = "2026-01-22T22:29:59.679Z" },
{ url = "https://files.pythonhosted.org/packages/ca/71/37daa46f89475f8582b7762ecd2722492df26421714a33e72ccc9a84d7a5/ruff-0.14.14-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bb8481604b7a9e75eff53772496201690ce2687067e038b3cc31aaf16aa0b974", size = 11215882, upload-time = "2026-01-22T22:29:57.032Z" },
{ url = "https://files.pythonhosted.org/packages/2c/10/a31f86169ec91c0705e618443ee74ede0bdd94da0a57b28e72db68b2dbac/ruff-0.14.14-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:14649acb1cf7b5d2d283ebd2f58d56b75836ed8c6f329664fa91cdea19e76e66", size = 11180549, upload-time = "2026-01-22T22:30:27.175Z" },
{ url = "https://files.pythonhosted.org/packages/fd/1e/c723f20536b5163adf79bdd10c5f093414293cdf567eed9bdb7b83940f3f/ruff-0.14.14-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:e8058d2145566510790eab4e2fad186002e288dec5e0d343a92fe7b0bc1b3e13", size = 10543416, upload-time = "2026-01-22T22:30:01.964Z" },
{ url = "https://files.pythonhosted.org/packages/3e/34/8a84cea7e42c2d94ba5bde1d7a4fae164d6318f13f933d92da6d7c2041ff/ruff-0.14.14-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:e651e977a79e4c758eb807f0481d673a67ffe53cfa92209781dfa3a996cf8412", size = 10285491, upload-time = "2026-01-22T22:30:29.51Z" },
{ url = "https://files.pythonhosted.org/packages/55/ef/b7c5ea0be82518906c978e365e56a77f8de7678c8bb6651ccfbdc178c29f/ruff-0.14.14-py3-none-musllinux_1_2_i686.whl", hash = "sha256:cc8b22da8d9d6fdd844a68ae937e2a0adf9b16514e9a97cc60355e2d4b219fc3", size = 10733525, upload-time = "2026-01-22T22:30:06.499Z" },
{ url = "https://files.pythonhosted.org/packages/6a/5b/aaf1dfbcc53a2811f6cc0a1759de24e4b03e02ba8762daabd9b6bd8c59e3/ruff-0.14.14-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:16bc890fb4cc9781bb05beb5ab4cd51be9e7cb376bf1dd3580512b24eb3fda2b", size = 11315626, upload-time = "2026-01-22T22:30:36.848Z" },
{ url = "https://files.pythonhosted.org/packages/2c/aa/9f89c719c467dfaf8ad799b9bae0df494513fb21d31a6059cb5870e57e74/ruff-0.14.14-py3-none-win32.whl", hash = "sha256:b530c191970b143375b6a68e6f743800b2b786bbcf03a7965b06c4bf04568167", size = 10502442, upload-time = "2026-01-22T22:30:38.93Z" },
{ url = "https://files.pythonhosted.org/packages/87/44/90fa543014c45560cae1fffc63ea059fb3575ee6e1cb654562197e5d16fb/ruff-0.14.14-py3-none-win_amd64.whl", hash = "sha256:3dde1435e6b6fe5b66506c1dff67a421d0b7f6488d466f651c07f4cab3bf20fd", size = 11630486, upload-time = "2026-01-22T22:30:10.852Z" },
{ url = "https://files.pythonhosted.org/packages/9e/6a/40fee331a52339926a92e17ae748827270b288a35ef4a15c9c8f2ec54715/ruff-0.14.14-py3-none-win_arm64.whl", hash = "sha256:56e6981a98b13a32236a72a8da421d7839221fa308b223b9283312312e5ac76c", size = 10920448, upload-time = "2026-01-22T22:30:15.417Z" },
]

[[package]]
name = "texttable"
version = "1.7.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/1c/dc/0aff23d6036a4d3bf4f1d8c8204c5c79c4437e25e0ae94ffe4bbb55ee3c2/texttable-1.7.0.tar.gz", hash = "sha256:2d2068fb55115807d3ac77a4ca68fa48803e84ebb0ee2340f858107a36522638", size = 12831, upload-time = "2023-10-03T09:48:12.272Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/24/99/4772b8e00a136f3e01236de33b0efda31ee7077203ba5967fcc76da94d65/texttable-1.7.0-py2.py3-none-any.whl", hash = "sha256:72227d592c82b3d7f672731ae73e4d1f88cd8e2ef5b075a7a7f01a23a3743917", size = 10768, upload-time = "2023-10-03T09:48:10.434Z" },
]

[[package]]
name = "ty"
version = "0.0.14"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/af/57/22c3d6bf95c2229120c49ffc2f0da8d9e8823755a1c3194da56e51f1cc31/ty-0.0.14.tar.gz", hash = "sha256:a691010565f59dd7f15cf324cdcd1d9065e010c77a04f887e1ea070ba34a7de2", size = 5036573, upload-time = "2026-01-27T00:57:31.427Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/99/cb/cc6d1d8de59beb17a41f9a614585f884ec2d95450306c173b3b7cc090d2e/ty-0.0.14-py3-none-linux_armv6l.whl", hash = "sha256:32cf2a7596e693094621d3ae568d7ee16707dce28c34d1762947874060fdddaa", size = 10034228, upload-time = "2026-01-27T00:57:53.133Z" },
{ url = "https://files.pythonhosted.org/packages/f3/96/dd42816a2075a8f31542296ae687483a8d047f86a6538dfba573223eaf9a/ty-0.0.14-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:f971bf9805f49ce8c0968ad53e29624d80b970b9eb597b7cbaba25d8a18ce9a2", size = 9939162, upload-time = "2026-01-27T00:57:43.857Z" },
{ url = "https://files.pythonhosted.org/packages/ff/b4/73c4859004e0f0a9eead9ecb67021438b2e8e5fdd8d03e7f5aca77623992/ty-0.0.14-py3-none-macosx_11_0_arm64.whl", hash = "sha256:45448b9e4806423523268bc15e9208c4f3f2ead7c344f615549d2e2354d6e924", size = 9418661, upload-time = "2026-01-27T00:58:03.411Z" },
{ url = "https://files.pythonhosted.org/packages/58/35/839c4551b94613db4afa20ee555dd4f33bfa7352d5da74c5fa416ffa0fd2/ty-0.0.14-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ee94a9b747ff40114085206bdb3205a631ef19a4d3fb89e302a88754cbbae54c", size = 9837872, upload-time = "2026-01-27T00:57:23.718Z" },
{ url = "https://files.pythonhosted.org/packages/41/2b/bbecf7e2faa20c04bebd35fc478668953ca50ee5847ce23e08acf20ea119/ty-0.0.14-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:6756715a3c33182e9ab8ffca2bb314d3c99b9c410b171736e145773ee0ae41c3", size = 9848819, upload-time = "2026-01-27T00:57:58.501Z" },
{ url = "https://files.pythonhosted.org/packages/be/60/3c0ba0f19c0f647ad9d2b5b5ac68c0f0b4dc899001bd53b3a7537fb247a2/ty-0.0.14-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:89d0038a2f698ba8b6fec5cf216a4e44e2f95e4a5095a8c0f57fe549f87087c2", size = 10324371, upload-time = "2026-01-27T00:57:29.291Z" },
{ url = "https://files.pythonhosted.org/packages/24/32/99d0a0b37d0397b0a989ffc2682493286aa3bc252b24004a6714368c2c3d/ty-0.0.14-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:2c64a83a2d669b77f50a4957039ca1450626fb474619f18f6f8a3eb885bf7544", size = 10865898, upload-time = "2026-01-27T00:57:33.542Z" },
{ url = "https://files.pythonhosted.org/packages/1a/88/30b583a9e0311bb474269cfa91db53350557ebec09002bfc3fb3fc364e8c/ty-0.0.14-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:242488bfb547ef080199f6fd81369ab9cb638a778bb161511d091ffd49c12129", size = 10555777, upload-time = "2026-01-27T00:58:05.853Z" },
{ url = "https://files.pythonhosted.org/packages/cd/a2/cb53fb6325dcf3d40f2b1d0457a25d55bfbae633c8e337bde8ec01a190eb/ty-0.0.14-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4790c3866f6c83a4f424fc7d09ebdb225c1f1131647ba8bdc6fcdc28f09ed0ff", size = 10412913, upload-time = "2026-01-27T00:57:38.834Z" },
{ url = "https://files.pythonhosted.org/packages/42/8f/f2f5202d725ed1e6a4e5ffaa32b190a1fe70c0b1a2503d38515da4130b4c/ty-0.0.14-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:950f320437f96d4ea9a2332bbfb5b68f1c1acd269ebfa4c09b6970cc1565bd9d", size = 9837608, upload-time = "2026-01-27T00:57:55.898Z" },
{ url = "https://files.pythonhosted.org/packages/f7/ba/59a2a0521640c489dafa2c546ae1f8465f92956fede18660653cce73b4c5/ty-0.0.14-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:4a0ec3ee70d83887f86925bbc1c56f4628bd58a0f47f6f32ddfe04e1f05466df", size = 9884324, upload-time = "2026-01-27T00:57:46.786Z" },
{ url = "https://files.pythonhosted.org/packages/03/95/8d2a49880f47b638743212f011088552ecc454dd7a665ddcbdabea25772a/ty-0.0.14-py3-none-musllinux_1_2_i686.whl", hash = "sha256:a1a4e6b6da0c58b34415955279eff754d6206b35af56a18bb70eb519d8d139ef", size = 10033537, upload-time = "2026-01-27T00:58:01.149Z" },
{ url = "https://files.pythonhosted.org/packages/e9/40/4523b36f2ce69f92ccf783855a9e0ebbbd0f0bb5cdce6211ee1737159ed3/ty-0.0.14-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:dc04384e874c5de4c5d743369c277c8aa73d1edea3c7fc646b2064b637db4db3", size = 10495910, upload-time = "2026-01-27T00:57:26.691Z" },
{ url = "https://files.pythonhosted.org/packages/08/d5/655beb51224d1bfd4f9ddc0bb209659bfe71ff141bcf05c418ab670698f0/ty-0.0.14-py3-none-win32.whl", hash = "sha256:b20e22cf54c66b3e37e87377635da412d9a552c9bf4ad9fc449fed8b2e19dad2", size = 9507626, upload-time = "2026-01-27T00:57:41.43Z" },
{ url = "https://files.pythonhosted.org/packages/b6/d9/c569c9961760e20e0a4bc008eeb1415754564304fd53997a371b7cf3f864/ty-0.0.14-py3-none-win_amd64.whl", hash = "sha256:e312ff9475522d1a33186657fe74d1ec98e4a13e016d66f5758a452c90ff6409", size = 10437980, upload-time = "2026-01-27T00:57:36.422Z" },
{ url = "https://files.pythonhosted.org/packages/ad/0c/186829654f5bfd9a028f6648e9caeb11271960a61de97484627d24443f91/ty-0.0.14-py3-none-win_arm64.whl", hash = "sha256:b6facdbe9b740cb2c15293a1d178e22ffc600653646452632541d01c36d5e378", size = 9885831, upload-time = "2026-01-27T00:57:49.747Z" },
]

[[package]]
name = "urllib3"
version = "2.6.3"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/c7/24/5f1b3bdffd70275f6661c76461e25f024d5a38a46f04aaca912426a2b1d3/urllib3-2.6.3.tar.gz", hash = "sha256:1b62b6884944a57dbe321509ab94fd4d3b307075e0c2eae991ac71ee15ad38ed", size = 435556, upload-time = "2026-01-07T16:24:43.925Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/39/08/aaaad47bc4e9dc8c725e68f9d04865dbcb2052843ff09c97b08904852d84/urllib3-2.6.3-py3-none-any.whl", hash = "sha256:bf272323e553dfb2e87d9bfd225ca7b0f467b919d7bbd355436d3fd37cb0acd4", size = 131584, upload-time = "2026-01-07T16:24:42.685Z" },
]

[[package]]
name = "zabbix-utils"
version = "2.0.4"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/76/d6/5e52b23074938833bf37426940965597eb8057bf5860014deda997b3c317/zabbix_utils-2.0.4.tar.gz", hash = "sha256:e46b15c5b51ade4692aa009939372bce68871cf64d6572e96e8cb193cb0590ea", size = 28658, upload-time = "2025-12-17T10:29:50.067Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/64/e5/9c655df064fa6fdf1796db5c6e5505b7d19695ce8fda34dab326a84f86cf/zabbix_utils-2.0.4-py3-none-any.whl", hash = "sha256:103e07c54d37c775781d7030788a5f9b2a361420963d7a458feae96892fb4c48", size = 37833, upload-time = "2025-12-17T10:29:48.293Z" },
]