Plinko was an experiment with prefix trees designed to parse incoming machine data (e.g. log data) quickly and with little effort, and then "do something" with it.

I started this project years ago when I worked at a SIEM company. I saw that most SIEM products rely on placing retrievers and/or forwarders on each device you wish to collect logs from. These forwarders need to know what data they are collecting and must tell the main datastore what kind of data is being sent. A few cloud-based solutions now attempt to accept unknown data types, but none that I've seen do it correctly. Neither SumoLogic, Splunk Storm, nor Loggly successfully parses incoming unknown data types, which really hinders their services. They also do linear matching on data types, attempting sub-matches for events and event types within the log line to identify it. This method doesn't work well, as you can see for yourself by using any of those services.

The idea behind Plinko is to have it listen on a port and accept any data you want to throw at it. It will accept mixed data streams or single streams; it makes no difference. When Plinko starts, it reads from its library of STL files. Each STL file contains a regex, the named fields the regex captures go into, and an optional anonymous function to further parse the payloads of dynamic log lines (e.g. ssh messages always start the same, but the message payload varies depending on what happened: connection, disconnection, failed login, etc.).
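To make that concrete, here is a minimal sketch of what one such definition could look like as a Perl data structure. The key names, layout, and the sshd regex are illustrative assumptions on my part, not Plinko's actual STL file format:

    # Hypothetical STL definition for sshd lines. The structure shown here
    # (name/regex/fields/payload_parser) is illustrative, not Plinko's real format.
    my $stl = {
        name  => 'sshd',
        # Regex with named captures for the fixed part of the line
        regex => qr/^(?<month>\w{3})\s+(?<day>\d{1,2})\s+(?<time>[\d:]+)\s+
                     (?<host>\S+)\s+sshd\[(?<pid>\d+)\]:\s+(?<payload>.*)$/x,
        fields => [qw(month day time host pid payload)],
        # Optional anonymous sub that further parses the variable payload
        payload_parser => sub {
            my ($payload) = @_;
            if ($payload =~ /^Accepted (?<method>\S+) for (?<user>\S+) from (?<ip>\S+)/) {
                return { event => 'login', %+ };    # %+ holds the named captures
            }
            return { event => 'unknown', raw => $payload };
        },
    };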

Plinko takes these STL files, tokenizes all the captures, compiles the regexes, collapses tokens that are identical (e.g. syslog headers are always the same), and then builds its tree from the tokens. When a log line comes in, Plinko starts with the most common and longest token first until it finds a match. Upon a match, the remainder of the log line is passed down that branch and checked against the next possible tokens (e.g. matched a syslog header -> ssh|sudo|su|login|etc. -> next token) until Plinko has parsed and identified the log.
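As a rough illustration of that walk, here is a minimal recursive matcher over such a token tree. The node layout ({name, regex, children}) and the assumption that each node's children are pre-sorted most-common/longest-first are mine, not Plinko's internals:

    # Sketch of the tree walk, assuming each node's token regex is anchored
    # at the start of the string and children are pre-sorted by priority.
    sub match_line {
        my ($node, $line, $captures) = @_;
        $captures //= {};
        for my $child (@{ $node->{children} || [] }) {
            if ($line =~ $child->{regex}) {
                my %found = %+;                    # named captures for this token
                my $rest  = substr($line, $+[0]);  # remainder after the match
                @{$captures}{ keys %found } = values %found;
                if (!@{ $child->{children} || [] }) {
                    # Leaf node: the line is fully identified
                    return { type => $child->{name}, fields => $captures };
                }
                my $result = match_line($child, $rest, $captures);
                return $result if $result;
                # NOTE: a real implementation would undo $captures here
                # before backtracking to the next sibling token.
            }
        }
        return;    # no branch matched
    }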

Once Plinko parses the line, you have a choice of outputs. You can tell Plinko to export the data as a pure key=>value Perl hash, or have it converted to a JSON object. That data can then be printed to STDOUT, appended to one large mixed file, written to separate files per source type, or pushed out to a socket (e.g. to a datastore).
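The hash-to-JSON step itself is simple; a minimal sketch using the core JSON::PP module follows (Plinko itself may use a different JSON module, and the sample fields are made up):

    use JSON::PP;    # ships with core Perl since 5.14

    # A parsed line as a flat key=>value hash (illustrative fields)
    my %parsed = ( host => 'web01', program => 'sshd', user => 'bob', event => 'login' );

    # canonical() gives stable key ordering; the string can go to STDOUT,
    # a file handle, or a socket just the same.
    my $json = JSON::PP->new->canonical->encode(\%parsed);
    print "$json\n";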

As of v3.1, PlinkoNet.pm requires authentication before you can send data. See the README file for more info.
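Purely as an illustration of the shape of a client, here is a sketch that connects and authenticates before sending a line. The port number and the single-line "AUTH user pass" exchange are made-up placeholders; the real handshake is whatever the README documents:

    use IO::Socket::INET;

    my $sock = IO::Socket::INET->new(
        PeerAddr => 'localhost',
        PeerPort => 5140,          # hypothetical port, not PlinkoNet's real default
        Proto    => 'tcp',
    ) or die "connect failed: $!";

    print $sock "AUTH myuser mypass\n";    # placeholder credential exchange
    print $sock "Jan  1 00:00:01 web01 sshd[123]: Accepted password for bob from 10.0.0.5\n";
    close $sock;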

Please read the included README file for a quick start. You can also play with the constructors to get a better idea of how Plinko works and to change its behavior.

Required Plinko dependencies:

Required Plinko::PlinkoNet dependencies:

Required Plinko::Rest dependencies:

[Screenshot: Starting the Plinko engine]
[Screenshot: After the Plinko engine has parsed the data]
[Screenshot: Starting up the PlinkoNet engine]
[Screenshot: After the PlinkoNet engine has parsed the data]