My initial and still current (though downsized) hopes for Syslogd2
To give something back to the Linux community in exchange for the gift of a viable (and incorruptible) open-source alternative to MS Windows, and to the suffocating (and insufferable) IT environment that life under MS Windows was before Linux and GNU (and open-source in general).
To share some solutions I've created to problems encountered over my Linux career.
My initial plans for Syslogd2 before reality bit me
Syslogd2 would be open-source, under GPLv2 (preferred) or GPLv3. (Syslogd2 is initially being released as GPLv2.)
Syslogd2 would be released after a few months of development. (That was two decades ago.)
I naively thought I could understand multi-threaded programming and data-communications programming in a "mere" several months (a year at the outside).
Syslogd2 would be the first component of a distributed network-management system based on syslog instead of ICMP, in contrast to what has become an MS Windows-dominated industry based on selling expensive hardware preloaded with MS Windows licenses at outrageous profit margins.
Syslogd2 was designed to be fielded as a distributed "instrumentation package" for corporate hosts and network equipment.
Due to observed structural and design failures of other popular (centralized) network-management systems when deployed on large corporate networks, it became clear, even before the Syslogd2 prototype code was written back in 2002, that no centralized network-management system can process (nor can network links support) the volume of syslog traffic from the thousands of network switches and routers typically found on large campus networks (corporate headquarters, college campuses, etc.).
To reduce the network traffic to manageable levels, it is necessary to reduce syslog output levels to the point that no meaningful events arrive at the central location.
To get meaningful syslog events with traditional (host-logger) syslog daemons, it is necessary to raise the individual device traffic levels, which (when aggregated over a couple thousand devices) congests network links and switch ports at the central syslog receiver. This in turn results in lost data, and often no data at all, as the CPUs and/or network are simply overwhelmed by the traffic volume.
The only apparent solution to this catch-22 is to distribute the syslog collectors as close to the generating devices as possible, so that each collector can receive the maximum amount of syslog traffic consistent with good network performance metrics. Each syslog collector then provides "forensic files" of the incoming data while applying predefined filters to send only events-of-interest to the downstream central processing components.
The "forensic" files would be kept for some number of days (depending on disk space and traffic volume), giving both network and security management a "window" during which all possible network information has been captured should any outage or cyber-attack need investigating.
The output-filters being applied to the data would act to minimize overall traffic volume and to concentrate the value of what data DOES get forwarded to central analysis platforms.
If done right, the syslog data-collectors could be deployed once and then support a variety of vendor analysis engines (or no engine at all).
They would be (ideally) open-source code on cheap Intel/AMD hardware (possibly obsolete from an MS Windows usability standpoint) running an open-source Linux OS. Outside of a small handful of key locations (firewalls, the central collector, etc.), the requirements of Syslogd2 are not significantly taxing for even the smallest Linux-based hosts.
Once deployed, these syslog-collectors can also be called on in the future to serve additional distributed purposes (NIS slaves, DNS secondary servers, local NFS hosts, etc).
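The collect-everything-locally, forward-only-matches idea above can be sketched as a small filter stage. This is a hypothetical illustration in Python; the pattern list and message texts are invented for the example and are not Syslogd2's actual filter syntax.

```python
import re

# Illustrative "events-of-interest" rules; a real deployment would load
# these from a predefined filter configuration.
FORWARD_PATTERNS = [
    re.compile(r"LINK-3-UPDOWN"),          # interface state changes
    re.compile(r"authentication fail", re.IGNORECASE),
]

def handle_message(msg: str, forensic_log: list, forward_queue: list) -> None:
    """Every message goes to the local forensic capture; only matches
    are queued for forwarding to the central processing components."""
    forensic_log.append(msg)                      # full local capture
    if any(p.search(msg) for p in FORWARD_PATTERNS):
        forward_queue.append(msg)                 # event-of-interest

forensic, outbound = [], []
handle_message("<189>... %LINK-3-UPDOWN: Gi0/1 changed state to down",
               forensic, outbound)
handle_message("<190>... routine keepalive", forensic, outbound)
# forensic now holds both messages; outbound holds only the link-down event
```

The point of the split is that the forensic file preserves everything for post-incident investigation, while only the filtered subset consumes bandwidth toward the central site.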
Other components of this system would include:
A "pinger" (for network-outage reporting) based on the concept of the MS Windows eye-candy that I have come to despise, but with the following major differences: (This is still a valid goal, but currently at a very low priority pending acceptance and usage of Syslogd2 and DBD2.)
It would be deployed as a distributed application, potentially sharing the Syslogd2 distributed platforms. As such, it would be more accurate than centralized 'pingers' while creating much less network traffic and congestion.
Individual 'test stations' would be closer to the devices being tested, to 'localize' the added network-management-induced traffic.
Tests could be tailored to individual hosts to allow host-based applications to be evaluated rather than just the host operating system.
Some accommodation would be made for disabling alerts during maintenance windows or during recovery from significant network outages.
It would allow connections to service ports to verify that applications are actually responding, instead of relying solely on an ICMP response from the host.
It would be open-source code to make deployment affordable and available for any size network.
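The service-port check mentioned above can be sketched in a few lines. This is a minimal Python illustration; the host and port in the usage comment are hypothetical.

```python
import socket

def check_service(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    Unlike an ICMP echo, a successful connect shows that the application
    listening on the port is at least accepting connections, not merely
    that the host's network stack is up.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: a host may answer ping while its syslog receiver is down.
# check_service("collector.example.net", 514)   # hypothetical host/port
```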
A "parser" that would extract and mark user-specified data-fields from syslog messages. This was started and postponed until I could more precisely identify requirements and until Syslogd2 and DBD2 got some run-time experience in real-life networks.
A "database-insertion-component" that would receive (filtered and parsed) syslog data and 'feed' a database. This is being realized with my DBD2 project.
That database would initially be MySQL, but would later expand to multiple database-types and vendors.
Just with Syslogd2 and this database-feeder application (now known as DBD2), I felt that two major syslog-based management issues would be tamed and a path would be opened for open-source-based network-management products and service contracts throughout the IT industry.
The purpose of the database-feeder app would be to provide a high-end, generic, multi-threaded (multi-database) tool for open-source network developers to create management databases from selected (filtered) syslog data.
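The parse-and-insert pipeline described above might look something like this sketch. It uses Python's built-in SQLite in place of MySQL so the example is self-contained; the table layout and the regex are illustrative, not DBD2's actual schema or parser.

```python
import re
import sqlite3

# Match the RFC 3164 PRI field (<facility*8 + severity>) at the start of
# a raw syslog line; everything after it is kept as the message body.
SYSLOG_RE = re.compile(r"^<(?P<pri>\d{1,3})>(?P<body>.*)$")

def feed(conn: sqlite3.Connection, raw: str) -> None:
    """Parse one raw syslog line and insert its fields into the database."""
    m = SYSLOG_RE.match(raw)
    if not m:
        return                               # unparseable lines skipped here
    facility, severity = divmod(int(m.group("pri")), 8)  # RFC 3164 decoding
    conn.execute(
        "INSERT INTO events (facility, severity, body) VALUES (?, ?, ?)",
        (facility, severity, m.group("body")),
    )

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (facility INT, severity INT, body TEXT)")
feed(conn, "<34>su: 'su root' failed for alice on /dev/pts/8")
# PRI 34 decodes to facility 4 (auth) and severity 2 (critical)
```

A multi-threaded, multi-database version of this idea, fed by the upstream filters, is essentially what the DBD2 project aims at.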
A versatile (open-source) analysis engine that would both provide some simple reports and that would serve as a 'jumping-off-point' for local customization and reporting. This has since been dropped from my list due to lack of time and focus.
A simple webpage that would (again) serve as a template or 'jumping-off point' for a more customer-focused, customized application. This has since been dropped from my list.