[panw] Added support for PAN-OS 10 #3527
Conversation
Pinging @elastic/security-external-integrations (Team:Security-External Integrations)
packages/panw/data_stream/panos/elasticsearch/ingest_pipeline/authentication.yml
Is there a reason not to do these in the csv processor?
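For illustration only (the column names here are hypothetical, not the PR's actual columns), this is the kind of consolidation the question points at: the csv processor can write each column straight into its final field, so separate rename/set steps afterwards are not needed:

```yaml
# Sketch: let csv place columns directly into their final fields
- csv:
    field: message
    target_fields:
      - panw.panos.type
      - panw.panos.sub_type
      - event.created
    ignore_missing: true
```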
packages/panw/data_stream/panos/elasticsearch/ingest_pipeline/config.yml
Is there a reason to duplicate the ECS fields into the PanOS fields? Is it to explicitly show the provenance of the ECS fields. If that's the case, maybe reverse the assignments (csv into the panos fields and copy into the ecs).
Reverse these so panw.panos.host.ip and panw.panos.device_name are set in the csv.
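A sketch of the reversed flow the reviewer describes (column positions and the ECS targets are assumed for illustration): the csv processor fills the vendor fields first, and set then copies them into ECS, making the provenance explicit:

```yaml
# Vendor fields are populated directly by the csv processor...
- csv:
    field: message
    target_fields:
      - panw.panos.host.ip
      - panw.panos.device_name
# ...and then copied into their ECS counterparts
- set:
    field: host.ip
    copy_from: panw.panos.host.ip
    ignore_empty_value: true
- set:
    field: host.name
    copy_from: panw.panos.device_name
    ignore_empty_value: true
```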
Add description.
Add description.
Add description.
Add description.
Add description.
```yaml
- convert:
    field: source.ip
    type: ip
    ignore_failure: true
```
With this configuration, the processor does nothing. source.ip is already a string, and after "conversion" to type: ip it will still be a string. The reason convert has a type: ip is to perform validation. On failure, that validation can result in the value not being copied to target_field, or it can result in the execution of an on_failure handler. But this has neither of those.
So in this case I think you want to remove ignore_failure, add an on_failure handler to remove the invalid field, and add ignore_missing: true.
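A sketch of the suggested shape (not the final pipeline, just the pattern the comment describes):

```yaml
- convert:
    field: source.ip
    type: ip
    ignore_missing: true
    on_failure:
      # Validation failed: drop the invalid value instead of keeping it
      - remove:
          field: source.ip
```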
```yaml
- convert:
    field: panw.panos.source.port
    type: long
    ignore_failure: true
```
Assuming the mapping type is a long, then leaving the bad field value present via ignore_failure: true will result in an event that cannot be indexed (the only exception is if the value is null). IMO it's better to deal with those issues in the pipeline than let an event pass through that will fail to index. My suggestion is to remove the field with an on_failure, or if you don't want a silent failure you can remove the field and append an error.message.
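Sketched with both options from the comment combined (the error.message text is illustrative):

```yaml
- convert:
    field: panw.panos.source.port
    type: long
    ignore_missing: true
    on_failure:
      # Drop the unconvertible value so the event can still be indexed...
      - remove:
          field: panw.panos.source.port
      # ...and record the failure so it is not silent
      - append:
          field: error.message
          value: 'Cannot convert panw.panos.source.port to long: {{{_ingest.on_failure_message}}}'
```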
```yaml
# Add '-' in Mac Address and convert it into uppercase
- gsub:
    field: panw.panos.src.mac
    pattern: '[-:.]'
```
```diff
-    pattern: '[-:.]'
+    pattern: '[:.]'
```
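Taken together with the uppercase step the original comment describes, the whole normalization might look like this sketch (hedged: field handling is assumed, not copied from the PR):

```yaml
# Replace ':' or '.' separators with '-'...
- gsub:
    field: panw.panos.src.mac
    pattern: '[:.]'
    replacement: '-'
    ignore_missing: true
# ...then uppercase, e.g. aa:bb:cc:dd:ee:ff -> AA-BB-CC-DD-EE-FF
- uppercase:
    field: panw.panos.src.mac
    ignore_missing: true
```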
```diff
-    if: 'ctx?.destination?.nat?.ip == "0.0.0.0" && ctx?.destination?.nat?.port == 0'
+    if: 'ctx.destination?.nat?.ip == "0.0.0.0" && ctx.destination?.nat?.port == 0'
```

```yaml
#Remove custom fields
```
```diff
-#Remove custom fields
+# Remove panw.panos fields that are copied into an ECS field.
```
```yaml
    ignore_failure: true
    source: |
      Map map = new HashMap();
      map.put("add", "cmd-add");
```
This data should be put into params so that a new map does not need to be allocated on every invocation of the processor. The data can be accessed through params.get(key) or params[key] in the script.
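A sketch of the params-based form (the ctx field names here are illustrative, not the PR's exact script):

```yaml
- script:
    lang: painless
    params:
      add: cmd-add
      clone: cmd-clone
    source: |
      // Look up the value in the shared params map; no per-event allocation
      String cmd = ctx.panw?.panos?.cmd;
      if (cmd != null && params.containsKey(cmd)) {
        ctx.event.action = params.get(cmd);
      }
```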
```yaml
- script:
    params:
      add: cmd-add
      clone: cmd-clone
      ...
```
Please unlink the dashboards to embed any visualizations that are not shared across dashboards.
Hey @andrewkroh,
I am testing with real live data. All but THREAT seems to work great.
@LaZyDK is that event missing the final quote in the original event? AFAICS the pipeline here is not removing it.
It seems like it, yes.
So is there other significant syntax that is missing? It sounds like there might be.
I will compare it to some of the other logs and get back to you in a few hours.
I did compare to other events and found that some of my messages are being cut in length.
As per RFC5424:
@LaZyDK What input were you using? 8k should be fine for the inputs supported. If you temporarily enable debug for the Agent (https://www.elastic.co/guide/en/fleet/current/elastic-agent-logging.html#agent-logging-levels), do you see truncated messages on the Beat/Agent side by looking at the logs?
Looking at the event I sent you earlier, it is over 10000 chars - without the syslog headers.
I was working with @LaZyDK to figure out where the problem lies. At the moment it looks like PAN-OS might be sending the message without the final escaped quote. He's going to use the custom UDP input integration to capture the data and see if maybe it's the
Speaking of the local processing contained in the We probably want to do this outside of this change and this probably affects other integrations that are using
That sounds good to me. It's true that we'll probably want to evaluate this on a per-integration case to be safe, but that should be the general way of doing it.
It's also worth looking at RFC 5426 (Transmission of Syslog Messages over UDP). It clearly states that there must only be one message per datagram, in which that message should either fit within that bound or be truncated (which seems to be the case here). The size of the datagram will of course rely on external factors (what PAN-OS is doing, MTU, etc). Of course PAN-OS could ignore all of this, but if it tries to send the message over multiple datagrams, the UDP input and syslog processor will NOT reassemble the message. If we expect messages to exceed MTU (among other factors), we should be using TCP here, not UDP.
Hey @LaZyDK, in the doc https://docs.paloaltonetworks.com/pan-os/10-2/pan-os-admin/user-id/user-id-concepts/user-mapping/syslog it is mentioned that
We recommend using TCP input since the default max_message_size is 20MiB (https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-tcp.html#filebeat-input-tcp-tcp-max-message-size), but if you still want to use UDP input, we can set a value for max_message_size in the udp.yml.hbs file.
@vinit-elastic Let's expose the max_message_size as an advanced setting for TCP and UDP and set the default to allow receiving the expected size of the PAN-OS 10.x messages. Let's default it to
@LaZyDK said that increasing the max_message_size when using the custom UDP input integration fixed the truncation issue.
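A hedged sketch of how the input template could expose this setting (the variable names are assumed, not the PR's final udp.yml.hbs):

```yaml
# Hypothetical fragment of udp.yml.hbs exposing max_message_size
host: '{{listen_address}}:{{listen_port}}'
{{#if max_message_size}}
max_message_size: {{max_message_size}}
{{/if}}
```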
Sure @andrewkroh, it makes sense. I'll update the PR. 👍🏻
Well done. Test and push :D
What does this PR do?
This PR adds support for PAN-OS 10 to the existing panw integration.
Added support for new log types (Authentication, Config, Correlated Event, Decryption, GTP, IP tag, SCTP, System, Tunnel Inspection).
Added a toggle to remove duplicate custom fields for ECS-mapped fields.
Mapped fields according to the ECS schema and added fields metadata in the appropriate yml files.
Added a new set of built-in dashboards and visualizations.
Added pipeline tests for the data streams.
Added system test cases for the data streams.
Checklist
changelog.yml file.
Author's Checklist
How to test this PR locally
Clone the integrations repo.
Install elastic-package locally.
Start the Elastic stack using elastic-package.
Move to the integrations/packages/panw directory.
Run the following command to run the tests:
elastic-package test
Related issues
Fixes the following issues in the current integration.
Known Issues
uri_parts processor is not used.
client.ip and source.ip, however, these fields are not mapped in the pipeline.
Screenshots