Many of today’s IT and network operations teams can no longer keep pace with the increasing demands of their digitally empowered users and the rate of change throughout the network. In order to stay relevant and be able to quickly respond to any need or incident that might arise, it’s vital for network professionals to take a more innovative approach to network optimization. And that’s where NetOps comes in.
NetOps is based loosely on the application of DevOps culture, principles and tools borrowed from the software development and deployment world to make network infrastructure more scalable, dynamic and resilient. NetOps may seem like a simple idea; however, there is more to it than meets the eye.
SourceForge recently caught up with Rick Stevenson, the Chief Research Officer at Opengear, a provider of solutions for secure, resilient access and automation to critical IT infrastructure, to discuss the importance of network resilience for organizations and the possibilities and promise of NetOps. Stevenson also shares how Opengear’s NetOps Automation Platform gives businesses and their network teams the edge they need.
Q: Give us a quick overview of Opengear’s story – when and why was Opengear founded, how have you grown, and what types of customers do you work with?
A: Opengear was founded in 2004 after our previous venture, Snapgear, was acquired by Cyberguard (which was subsequently acquired by Secure Computing, which was then acquired by McAfee…). Snapgear designed and manufactured network security appliances based on embedded Linux back in the days when this was bleeding edge. Now, of course, it would be the exception to use anything else! We started Opengear with the intention of combining our hardware design and open source skills to innovate and disrupt in new markets, free of non-compete restrictions.
Our first push was into server management with the okvm project (okvm.sourceforge.net). As well as console manager software, we attempted to create a KVM-over-IP solution with open source hardware as well as software. This may have been too radical an idea for the time; failing to find enough traction in the developer community, we parked the project and pivoted.
Opengear’s first commercial products included simple serial console management appliances based, of course, on embedded Linux. We continued to add new capabilities and collaborated with and contributed to several other open source projects including Nagios and NUT (Network UPS Tools). The console management world was quite insular and slow-moving, and we shook things up with innovations like SDTConnector for simplified, secure tunneling of remote services (https://sourceforge.net/projects/sdtcon/) and the addition of integrated cellular networking.
One innovation in the security space that I’m particularly proud of was Opengear’s sole sponsorship in 2008 of an extension to the OpenSSL FIPS Module v1.2 validation to include, for the first time, support for cross-compilation and ARM processors. Previously, the OpenSSL FIPS 140-2 validation had been limited to x86 processors and this extended the applicability of this important security standard to the whole embedded world.
Opengear is self-funded and has grown organically at a rapid pace, achieving profitability only a few years after we started. We now have a wide portfolio of sophisticated out-of-band management solutions suitable for deployment anywhere from the datacenter to remote offices and edge networks. Our most remote installation is almost certainly the one in the Aloha Cabled Observatory, over 4,700 metres below the Pacific Ocean. We’re not aware of any of our products making it into space, but COSMOS @ DiRAC, the UK National Cosmology Supercomputer, selected Opengear technology to provide secure remote management.
Opengear products have horizontal applicability and provide value anywhere that network, computing or cloud infrastructure is critical to business. Our initial, early adopter customers were companies with especially stringent uptime requirements like Financials and Telcos, but we have a wide footprint in many industries including Retail, Media & Broadcast, Government, Education, Healthcare, Automotive and large Tech companies.
Q: Network resilience and secure network access have never been more important for enterprises. What’s your take on current network management issues that more businesses should be more aware of – or better prepared for?
A: There’s a rapidly accelerating trend away from expensive fixed line WAN connections in Enterprise networks to the more open, flexible and cost-effective use of software-defined wide area network (SD-WAN) technologies. SD-WAN offers a lot of advantages over expensive “old” technologies such as T-1 and MPLS circuits. Apart from reduced costs, SD-WAN can offer good security, simple cloud management, dynamic reconfiguration and bandwidth scaling as well as deployment of new services as a software download instead of a box replacement.
Unlike older style WAN services where the network intelligence is concentrated back in centralized exchanges under Telco control, SD-WAN distributes much of the “smarts” to the customer premises equipment (CPE) or SD-WAN appliances. These sophisticated devices, while able to load-balance and provide redundancy across multiple network connections, become a single point of failure. A few enlightened SD-WAN vendors are aware of this, but many still believe their devices are invulnerable.
If you’re moving to SD-WAN and have a truly business critical need for maximum network uptime you should consider full redundancy with a pair of SD-WAN CPE devices and separate network connections to each. If you can’t afford or justify a fully redundant system, then please consider a separate out-of-band management device with a cellular connection. Sooner or later it will get you out of trouble!
Q: In what ways can/does network resilience contribute to business growth? And perhaps relatedly, how can organizations align their network operations and management strategy with broader business goals?
A: A highly resilient network contributes to business growth by removing a source of friction. Network downtime or underperformance causes reputational damage with customers, loss of revenue, demotivates your employees and costs time and money to remediate. It’s not a “nice to have” option any more – it’s mandatory if you want to be competitive.
Successful modern businesses must be fast moving and responsive to their customers and the actions of their competitors. Whether they only rely on their network and IT systems for internal needs or build customer-facing products on top of them, they must be reliable and rapidly scalable to support the needs of the business. NetOps is a new approach to networking that can offer these benefits.
Q: Tell us about NetOps. What are its longer-term possibilities and promise?
A: Until recently, networking was characterized by a small number of large vendors, special purpose proprietary hardware and software, arcane command line interfaces and manual configuration of individual devices. One of my colleagues likens this recent past to the mainframe era of computing, and it is a model that has failed to adapt well to Internet scale operations.
NetOps is a broad and evolving term which currently defies concise definition, but important aspects include automation, declarative models, open APIs instead of proprietary CLIs, common standards for configuration and telemetry, and the virtualization of network functions. Network engineers no longer connect to a device and type configuration commands – they describe the desired state of the network in a high-level configuration language which is then translated automatically into API calls. New hardware devices are simply plugged in and automatically configure themselves and new virtual network functions are deployed across the network in real time without the need for human intervention.
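The declarative model described above can be illustrated with a minimal sketch: an engineer states the desired state, and a reconciler computes the API calls needed to move each device from its current state to that target. The device names, VLAN IDs, and call format below are all invented for illustration, not any particular vendor’s API.

```python
# Hypothetical desired-state reconciliation: the engineer declares what the
# network should look like; the tooling derives the API calls to get there.

desired_state = {"switch-01": {"vlans": [10, 20, 30]}}
current_state = {"switch-01": {"vlans": [10, 40]}}

def reconcile(desired, current):
    """Translate a desired-state description into a list of API calls."""
    calls = []
    for device, want in desired.items():
        have = current.get(device, {"vlans": []})
        # VLANs that should exist but don't yet: create them.
        for vlan in sorted(set(want["vlans"]) - set(have["vlans"])):
            calls.append(("POST", f"/devices/{device}/vlans", {"id": vlan}))
        # VLANs present on the device but absent from the declaration: remove.
        for vlan in sorted(set(have["vlans"]) - set(want["vlans"])):
            calls.append(("DELETE", f"/devices/{device}/vlans/{vlan}", None))
    return calls

calls = reconcile(desired_state, current_state)
```

The key design point is that the engineer never issues the POST and DELETE calls by hand; the same declaration can be re-applied idempotently, and a device replaced with factory defaults is simply reconciled from an empty current state.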
The main driver behind the NetOps movement is the ability to build networks that can scale rapidly, deal with highly variable demands and spin up and tear down new services rapidly, but it promises a slew of other benefits. Operational efficiencies and removal of vendor lock-in should reduce costs. Scope for human error is reduced, resulting in higher reliability and better security. Software updates and security patches can be tested in canary deployments and rolled out rapidly and consistently (and backed out again just as quickly if problems arise). Networks could even respond automatically to failures and self-heal.
Q: Opengear recently announced its new NetOps Automation Platform at Cisco Live. Give us an overview of what the platform offers and how it stands out in the market.
A: The Opengear NetOps Automation™ platform leverages our Lighthouse central management software and the physical proximity of our management appliances to network infrastructure to provide a solution for automating NetOps workflows. We provide orchestration capabilities, image and configuration storage, security services, remote communications and the ability to manage and run containerized applications wherever your infrastructure is located, in the datacenter, at the edge or anywhere in between. Standard tools are used where possible, for example Ansible, Git and Docker containers. Opengear will be developing NetOps modules to run on the platform and we expect that customers and third parties will do likewise.
The first NetOps module that Opengear has developed for the platform, Secure Provisioning, provides secure, automated provisioning of remote infrastructure. Opengear OM2000 appliances at remote sites “call home” to Lighthouse at the central location and receive configuration information and firmware for local devices that need to be configured or updated. Zero-touch provisioning services running on the OM2000 appliances then provision these devices. The OM2000 can communicate back to the central location using an out-of-band channel, such as an integrated cellular module, so that provisioning can be performed even before the production network is available or in the presence of communication failures. These appliances also contain a Trusted Platform Module to guarantee that the appliance is secure and that VPN keys and downloaded configurations and software have not been accessed or tampered with.
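The call-home flow described above can be sketched in outline. This is a hedged illustration only: the site name, inventory structure, and fallback logic are invented, and the real Lighthouse API and OM2000 behaviour may differ.

```python
# Sketch of a call-home provisioning flow: a remote appliance reaches the
# central site (falling back to cellular if the WAN is down), fetches the
# configuration bundle for its site, and provisions the local devices.

def choose_uplink(wan_up):
    """Fall back to the out-of-band cellular link if the WAN is unavailable."""
    return "wan" if wan_up else "cellular"

def call_home(site_id, wan_up, central_inventory):
    uplink = choose_uplink(wan_up)
    # In a real deployment this would be an authenticated HTTPS request to
    # the central management instance; here we just look the bundle up.
    bundle = central_inventory[site_id]
    provisioned = [dev["name"] for dev in bundle["devices"]]
    return uplink, provisioned

inventory = {
    "site-042": {
        "devices": [
            {"name": "router-1", "firmware": "r1.bin"},
            {"name": "switch-1", "firmware": "s1.bin"},
        ]
    }
}
```

The point of the structure is that provisioning succeeds even before the production network exists: the uplink choice is independent of the provisioning step, so a cellular-only site gets exactly the same bundle as one with a working WAN.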
We are also partnering with LogZilla, who have created the first third party NetOps module for our platform. Their integration provides a custom dashboard for Lighthouse and telemetry sensors that run in the OM2000 appliances and collect and preprocess event information from the managed devices at the remote sites. This provides deep insights across the entire distributed network, even reaching into sites which are disconnected due to network failures.
Many NetOps early adopters are well-resourced enterprises rolling their own solutions. That requires a significant commitment of time and money that not everybody is willing or able to make. The Opengear NetOps Automation platform will support canned modules that are easy to implement and use and act as a base for simplified development of custom modules. Our appliances provide proximity to the critical network infrastructure that needs to be managed, end-to-end security and guaranteed connectivity via cellular (or other out-of-band channels) even during a communications outage.
Q: Who is the new Opengear NetOps Automation Platform built for? What’s the ideal use case scenario?
A: The idea for the NetOps Automation Platform came from learning how our most tech savvy and well-resourced customers were developing automation tools based around our out-of-band management appliances. We saw an opportunity to build a platform and range of NetOps tools for mid-tier and smaller enterprises that don’t have the specialized expertise and resources to build their own. For the folks that do want to roll their own specialized NetOps solutions, the APIs and ability to run containerized applications simplify that process.
The NetOps Automation Platform has the potential to address a range of use cases by leveraging the privileged position that our appliances naturally occupy in the network. We are present on out-of-band and production networks, have proximity to the critical network infrastructure, offer always-up connectivity and resilience and provide hardware level security. We asked some of our forward-thinking customers what kept them awake at night and, despite using different words, they gave us very similar responses. They were worried about “configuration management automation,” “secure provisioning in the wild” and “scalable edge deployments.” That feedback informed the development of the Secure Provisioning module.
An ideal use case for the NetOps Automation Platform and Secure Provisioning module is for what our CTO calls the “Mars Lander” scenario – installing network infrastructure at a new remote location. Traditionally, this requires building the rack of equipment and manually pre-configuring it before shipping it to the remote site, living with the risk that someone will tamper with it en route, and hoping that the remote environment is exactly as expected, and everything will work when it arrives. Alternatively, you can send an expensive network engineer to the site along with the equipment. Neither approach scales very well.
With Secure Provisioning, the Opengear appliance has a simple “call home” configuration and is installed in a rack full of factory default, unconfigured equipment. When it reaches the remote site and is powered up, the Opengear appliance checks that it hasn’t been tampered with and then calls home, using its cellular interface if there is no WAN connection. It downloads configuration information and firmware images for the local managed devices and pushes them to those devices using zero-touch provisioning. If some unforeseen event occurs, then the Opengear appliance can provide out-of-band access for a network engineer at the central site to diagnose the issue and take corrective action.
Q: The NetOps Automation Platform utilizes a fully open architecture. Why is this approach important? And how else is Opengear showing a commitment towards open source?
A: Even if we didn’t have a natural predisposition to use open source, several of the choices we made gave us the best tools for the job. An open architecture provides vendor independence and is easy to extend using skills that are widely available. Building on top of existing, mature open source packages also gave us a huge amount of leverage, allowing us to concentrate on our value add rather than building basic features and “plumbing.”
I have already talked a bit about our early involvement in open source projects. More recently, we’ve released a couple of Linux drivers for embedded devices (driver for NCT7491 thermal monitor and fan controller, watchdog driver for NCT5104D SuperIO chip.) We recently started using Petitboot in our latest appliances and have been actively contributing back to that project as well. We encourage our engineers to participate in open source projects and our foray into NetOps will widen the range of projects that we contribute to.
Q: What other trends and technology strategies do you see on the rise in enterprise networking (and are there any that you expect to see dying out in the near future)? How is Opengear meeting these evolving market demands?
A: One trend in networking which is now well established is the commoditization of network hardware and the transition of value to software. The movement to white boxes for expensive core switching and routing devices is now extending to remote offices with generic uCPE hardware, network function virtualization and SD-WAN software stacks. We believe we’re well positioned to benefit from this trend as remote infrastructure becomes more complex and software based. The need for out-of-band management doesn’t go away. There’s still hardware that can fail, and the flexibility of modular software comes with the need for more frequent updates and patches and a combinatorial explosion that has the potential for unexpected interactions between components.
Another direction in networking is the move away from command line and graphical user interfaces towards programmatic configuration and better support for automation. Our management console, Lighthouse, and the latest generation of Opengear appliance software both embrace this change and provide a RESTful API which is the underlying mechanism for all configuration and management operations.
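The “every operation is an API call” pattern can be sketched with a request builder. The endpoint path, token, and payload below are hypothetical, invented to show the shape of REST-driven management rather than Lighthouse’s actual API; the request is built but never sent.

```python
# Sketch of programmatic configuration against a hypothetical REST-managed
# appliance: a UI action such as "rename node" maps to one HTTP call.

import json
import urllib.request

def build_config_request(base_url, token, node_id, config):
    """Construct an authenticated PUT that sets a node's configuration."""
    return urllib.request.Request(
        url=f"{base_url}/api/v1/nodes/{node_id}/config",
        data=json.dumps(config).encode(),
        method="PUT",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )

req = build_config_request("https://lighthouse.example", "TOKEN",
                           "node-7", {"hostname": "edge-gw-7"})
```

Because the same endpoint backs both the GUI and automation tooling, anything an operator can click through can also be scripted, version-controlled, and replayed; that property is what makes the automation trends discussed above practical.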
The final trend that I will mention is Edge Computing, also known as Fog Computing. With the rise of IoT, video everywhere, self-driving vehicles, etc. there has been a growing need to preprocess data remotely, both to limit latency for time-critical operations and to reduce the amount of data flowing upstream for centralized processing. Our appliances have been used at the Edge before the term even existed and with the recent addition of support for Docker containers in our new devices it is easier than ever to deploy edge computing applications. The LogZilla NetOps module is a great example, offering the ability to compact and enrich large volumes of event data so that it can be transmitted quickly and cost-effectively over an out-of-band cellular link.
About Opengear
Opengear delivers secure, resilient access and automation to critical IT infrastructure, even when the network is down. Provisioning, orchestration and remote management of network devices, through innovative software and appliances, enables technical staff to manage their data centers and remote network locations reliably and efficiently. Opengear’s business continuity solutions are trusted by global organizations across financial, digital communications, retail and manufacturing industries. The company is headquartered in New Jersey, with R&D centers in Silicon Valley and Brisbane, Australia.