How open source powers innovation

If you're looking for the next big thing in computing technology, start the search in open source communities.

Where do people come together to make cutting-edge invention and innovation happen?

The corporate lab

One possible answer is the corporate research lab. Focused more on the long term than most company product development efforts, corporate labs have a long history, going back to Thomas Edison's Menlo Park laboratory in New Jersey. Perhaps the most famous example is Bell Labs' invention of the transistor—although software folks may associate the lab more with Unix and the C programming language.

But corporate laboratories have tended to be associated with dominant firms that could afford to let large staffs work on very forward-looking and speculative research projects. After all, Bell Labs was born of the AT&T telephone monopoly. Corporate labs also aren't known for playing well with their counterparts elsewhere in the industry. Even if their focus is long-term, they're looking to profit from their IP eventually, which also means their research is often rooted in technologies commercially relevant to their business.

A long-term focus is also hard to maintain if a company becomes less dominant or less profitable. It's a common pattern that, over time, research and development in these labs starts to look like an extension of the more near-term focused product development process. Historically, corporate labs have focused on developing working artifacts and have benchmarked themselves against others in the industry by the size of their patent portfolios—although recently there has been more publishing of results than in years past.

The academy

Another engine of innovation is the modern research university. In the US, at least, the university as a research institution primarily emerged in the late 19th century, although some such schools had colonial-era roots. The university research model truly accelerated after World War II.

Academia is both collaborative and siloed: collaborative in that professors will often collaborate with colleagues around the globe, siloed in that even colleagues at the same institution may not collaborate much if they're not in the same specialty. Although IP may not be protected as vigorously in a university setting as in a corporate one, it can still be a consideration. The most prominent research universities make large sums of money from IP licensing.

The primary output of the academy is journal papers. This focus on publication, sometimes favoring quantity over quality, comes with a famous phrase attached: publish or perish. Furthermore, while the content of papers is far more about novel results than commercial potential, that has a flip side: research can end up being quite divorced from real-world concerns and use cases. Among other consequences, this is often not ideal for the students working on that research if they move on to industry after graduation.

Open source software

What of open source software? Certainly, major projects are highly collaborative. Open source software also supports the kind of knowledge diffusion that, throughout history, has enabled the spread of at least incremental advances in everything from viticulture to blast furnace design in 19th-century England. That said, open source software historically had a reputation primarily for being good enough and cheaper than proprietary software. That's changed significantly, especially in areas like working with large volumes of data and the whole cloud-native ecosystem. That shift probably reflects how, in many cases, collaboration has trumped any tendency toward incrementalism. IP concerns are also largely addressed by open source licenses—occasional patent and license incompatibility issues notwithstanding.

The open source model has been less successful outside the software space, though there are exceptions. There have been some wins in data, such as open government datasets and projects like OpenStreetMap—although the data associated with many commercial machine learning projects, for example, remains a closely guarded secret. The open instruction set architecture specification RISC-V is another developing success story; by taking a different approach from earlier open hardware efforts, it seems to be succeeding where they did not.

Open source software is most focused on shipping code, of course. However, associated artifacts such as documentation and validated patterns for deploying code through GitOps processes are increasingly recognized as important.

The question

This raises an important question: How do you take what is good about each of these patterns for creating innovation and combine them? Specifically, how do you apply open source principles and practices where appropriate? That's what we've sought to accomplish with Red Hat Research.

Red Hat Research

Work towards what came to be Red Hat Research began in Red Hat's Brno office in the Czech Republic in the early 2010s. In 2018, the research program added a major academic collaboration with Boston University: the Red Hat Collaboratory. The goal was to advance research in emerging technologies in areas of joint interest, such as operating systems and hybrid cloud. The scope of projects that Red Hat and its academic partners collaborate on has since expanded considerably, although infrastructure remains a major focus.

The Collaboratory sponsors research projects led by collaborative teams of BU faculty and Red Hat engineers. It also supports fellowships and internship programs for students and organizes joint talks and workshops.

In addition to activities like those associated with the Collaboratory, Red Hat Research now publishes a quarterly magazine (sign up for your free print or PDF subscription!), runs Red Hat Research Days events, and has regional Research Interest Groups (RIGs) open to a wide range of participants. Red Hat engineers also teach classes and work with students and faculty to produce prototypes, demos, and jointly authored research papers as well as code.

What sorts of open source research projects?

Red Hat Research participates in research projects with universities around the world. These are just a few recent examples from North American partnerships.

  • Machine learning for cloud ops: Continuous Integration/Continuous Delivery (CI/CD) environments move at a breakneck pace and use a wide variety of components. Relying on human experts to manage these processes is unreliable, costly, and often not scalable. The AI for Cloud Ops project, housed at the BU Collaboratory, aims to provide AI-driven analytics and heavily automated "ops" functionality to improve performance, resilience, and security in the cloud without incurring high operation costs. The project's principal investigator, BU professor Ayse Coskun, was interviewed about the project for a recent Red Hat Research Quarterly.
     
  • Rust memory safety: Rust is a relatively new language that aims to be suitable for low-level programming tasks while providing the memory safety that a language like C lacks. The problem: Rust has an "unsafe" keyword that suspends some of the compiler's memory safety checks within a specified code block. There are often good reasons to use this keyword, but it is then up to the developer to ensure the code is memory safe, which sometimes does not work out so well (see the short sketch after this list). Researchers at Columbia University and Red Hat engineers are exploring methods for the automated detection of vulnerabilities in Rust, which can then be used to help automate the development, testing, and debugging of real-world software.
     
  • Linux-based unikernel: A unikernel is a single bootable image consisting of user code linked with additional components that provide kernel-level functionality, such as opening files. The resulting program can boot and run on its own as a single process, in a single address space, at an elevated privilege level without a conventional operating system. This fusion of application and kernel components is very lightweight and can have performance and security advantages. A sizable team of BU researchers and Red Hat engineers has been working on adding unikernel capabilities into the same source code tree as regular Linux.
     
  • Image provenance analysis: It's increasingly easy to create composite or otherwise altered images to spread malicious, untrue information intended to influence behavior. A research collaboration among the University of Notre Dame, Loyola University, and Red Hat is using graph clustering and other techniques to develop scalable mechanisms for detecting these images. Among the project's goals is to see whether there's also the potential to look at metadata or other sources of information. The upstream pyIFD project has come out of this research.
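
To make the Rust item above concrete, here is a minimal sketch of how "unsafe" shifts responsibility for memory safety from the compiler to the developer. The snippet is purely illustrative and is not code from the Columbia/Red Hat project; the vector and the get_unchecked call are simply stand-ins for the kind of unchecked access such research targets.

    fn main() {
        let values = vec![10, 20, 30];

        // Safe indexing: an out-of-range index would trigger a runtime
        // bounds check and a panic rather than undefined behavior.
        println!("{}", values[2]);

        // Inside an "unsafe" block, get_unchecked skips the bounds check
        // entirely. The developer must guarantee the index is in range;
        // if it is not, the result is undefined behavior that no compiler
        // error or panic will catch.
        let last = unsafe { *values.get_unchecked(values.len() - 1) };
        println!("{}", last);
    }

The unsafe block compiles cleanly even if the index calculation were wrong, which is exactly why automated detection of vulnerabilities in "unsafe" regions is an attractive research target.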

Only the beginning

The above is just a small sample of the many innovative research projects Red Hat Research is involved with. I encourage you to head over to the Red Hat Research projects page to see all the other exciting work going on.


This article is adapted from a post on the Red Hat Research blog and is republished with permission.

Gordon Haff is a Red Hat technology evangelist and a frequent and highly acclaimed speaker at customer and industry events. He focuses on areas including Red Hat Research, open source adoption, and emerging technologies.


This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.