THE LINUX FOUNDATION PROJECTS
All Posts By dgochev

Automate your OSPO via Open Source Collaboration

By Featured, News

At a recent session of OpenChain & Friends 2026, the standard slide deck was replaced by a whiteboard and a candid, community-driven discussion. The goal? To map out how an Open Source Program Office (OSPO) moves from manual chaos to automated efficiency.

1. The Foundation: Policy and Configuration

The group reached a rapid consensus: Policy is the “North Star.” Every automation effort must stem from a clear policy. However, participants emphasized that automation isn’t a “set it and forget it” tool. It requires proper configuration to yield meaningful results; otherwise, you are simply automating the generation of “noise.”

2. The Carrot vs. The Stick

The discussion split OSPO responsibilities into two clear tracks:

  • The Carrot (Value/Contribution): Automation here focuses on lowering the barrier for Open Source and InnerSource contributions. By streamlining the “give back” process, companies unlock developer productivity and innovation.

  • The Stick (Compliance/Cost): This is the defensive play. Key components identified for automation include maintaining a List of Approved FOSS, tracking all components, and utilizing both static and dynamic detection for license and security (best effort) compliance.

3. Solving the Supplier & Legal Bottleneck

A major takeaway involved the supply chain. Supplier compliance is non-negotiable, but how do we get them there?

  • Peer-to-Peer Convincing: If a supplier is stuck using outdated methods (like manual snippet scanning), the most effective solution isn’t a stern email—it’s a connection. Introducing them to another OSPO with a successful automated setup provides the social proof needed to change their workflow.

  • External Legal Intelligence: For those without a dedicated legal team, the room recommended leveraging industry-standard resources like the OSADL License Checklists or the ScanCode database to verify license requirements.

4. The Power of Upstream and Community

The final, and perhaps most vital, point was about the human element behind the automation.

  • Fix it Upstream: When you find a bug or a compliance issue, fix it in the actual project. Upstreaming doesn’t just help the community; it saves your team the effort of maintaining a private fork forever.

  • Talk to the Experts: If you are stuck, don’t hire a consultant who doesn’t understand the “flow.” Reach out to the community. The best advice comes from those who are actively part of the ecosystem and understand the nuances of the projects you use.

 

Efficient FOSS Compliance: The Power of Community Curation and FOSSology


At the OpenChain and Friends event this March, one session stood out for its immediate practical value. Divided into two parts, the presentation moved from the “Why” of community curation to the “How” of technical implementation.

Following the Chatham House Rule, here is a simplified breakdown of the most practical session of the day.

Part 1: The Community Approach (OSSelot)

The first half of the session addressed a common headache: every company spends hours scanning the same open-source packages (like curl or bash) independently. This is a massive waste of resources.

The solution presented is OSSelot—a public curation database. Instead of starting from scratch, you can download pre-cleared compliance data.

  • What you get: Curated SPDX reports, license texts, and copyright notices that have already been reviewed by experts.

  • The Goal: To drastically reduce the time needed to clear a software package by reusing existing work.

Part 2: Putting it into Practice (FOSSology)

The second half, led by a deep dive into FOSSology, showed exactly how to automate this workflow. The beauty of this approach is in how it handles version updates.

The 3-Step Workflow:

  1. Baseline Upload: You upload the “official” version of a package from OSSelot into FOSSology (often via a simple API call or URL upload).

  2. Import Curated Data: Since the OSSelot data is already “cleared,” FOSSology absorbs this information instantly.

  3. The “Delta” Scan: When you need to check a new version of that software, you run a scan and tell FOSSology to reuse the results from the OSSelot baseline.

Why this is a game-changer: FOSSology will automatically match the files that haven’t changed. You only have to manually review the new or modified files.
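The matching logic can be pictured with a small sketch. This is a conceptual illustration of the reuse idea only, not FOSSology’s actual code: files whose checksums match the cleared baseline inherit its conclusions, and only the rest go to a human.

```python
# Conceptual sketch of the "delta" reuse idea: files whose hashes match the
# cleared baseline inherit its license conclusions; only new or modified
# files are queued for manual review. (Illustration only, not FOSSology code.)
import hashlib

def sha1(text: str) -> str:
    return hashlib.sha1(text.encode()).hexdigest()

def delta_review(baseline: dict, new_version: dict) -> tuple[dict, list]:
    """baseline/new_version map file paths to file contents."""
    baseline_hashes = {path: sha1(body) for path, body in baseline.items()}
    reused, needs_review = {}, []
    for path, body in new_version.items():
        if baseline_hashes.get(path) == sha1(body):
            reused[path] = "reused from cleared baseline"
        else:
            needs_review.append(path)
    return reused, needs_review

baseline = {"src/main.c": "int main(void){return 0;}", "LICENSE": "MIT"}
update = {"src/main.c": "int main(void){return 1;}", "LICENSE": "MIT",
          "src/new.c": ""}
reused, todo = delta_review(baseline, update)
print(sorted(reused))   # unchanged files, cleared automatically
print(sorted(todo))     # files a human must still review
```

Only `src/main.c` (modified) and `src/new.c` (new) end up in the manual queue; the unchanged `LICENSE` is cleared automatically, which is exactly the time savings the session demonstrated.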

Final Thoughts

This was the most practical and interesting session of the day for me. It transformed the daunting task of license compliance into a manageable, collaborative process. By using community-curated data and the “Reuse” features of FOSSology, we can stop reinventing the wheel and focus only on what has actually changed in our code.

It’s a perfect example of how sharing creates value for everyone in the open-source ecosystem.

 

 

 

Surviving the AI Slopageddon: Is Open Source Breaking?


The Problem: From “Bricks” to “Concrete Walls”

Traditionally, Open Source was built like a brick house: humans shared small patches of code, talked to each other, and built a community.

Today, we are facing the “Concrete Wall Drop.” AI can generate entire modules in seconds. Instead of humans collaborating, we have AI agents “dropping” massive amounts of code into projects. This is what experts call AI Slop—code that looks professional and has great documentation, but is often messy, redundant, or plain wrong inside.

The Reviewer’s Nightmare

The biggest issue is that writing code is now infinite, but checking it is not.

  • The Bottleneck: AI can create 1,000 lines of code instantly, but a human still needs hours to make sure it doesn’t have security holes.

  • The Shift: The hard work has moved from the writer to the reviewer. Maintainers are getting exhausted trying to spot “hallucinations” hidden behind neat-looking AI formatting.

Why the System is Shaking

Open Source used to work because of visibility. You used a tool, talked to the creator, and maybe donated or hired them.

Now, AI agents act as middlemen. A user asks an AI for an app, the AI grabs the code, and the user never even sees the human who actually maintains it. This makes the developer’s work invisible. If the people building the foundations of our software aren’t seen or supported, they might just stop building.

What’s Next?

We are moving into an “AI-native” world. To survive the Slopageddon, the community needs to find new ways to:

  1. Spot the “Slop”: Filter out low-quality AI code automatically.

  2. Protect Humans: Make sure the people behind the code are still visible and supported.

  3. Redefine Trust: We can’t trust code just because it “looks” right anymore.

The bottom line: AI can write code, but it can’t take responsibility for it. Keeping humans in the loop is the only way to save Open Source.

 

Training-as-Code: A New Era for Open Source Literacy


At a recent gathering of open-source compliance and education experts, a transformative approach to corporate learning was presented: Eclipse OSILK (Open Source & InnerSource Learning Kit). The presenter highlighted how the industry is moving away from static, hard-to-maintain training decks and toward a developer-centric “as-code” model.

The Problem: The “Maintenance Trap”

Organizations today face a significant challenge in scaling open-source literacy. While training materials exist, they often suffer from:

  • Poor Reusability: Rigid formats (like PDFs or complex PowerPoints) make it difficult to extract and repurpose content.

  • Customization Barriers: It is hard to adapt generic open-source training to an organization’s specific internal policies.

  • Stagnation: Once created, these materials are difficult to maintain, quickly becoming outdated as technologies and licenses evolve.

The Solution: Eclipse OSILK

The core philosophy of the session was simple: Treat training material like software source code. By using AsciiDoc—a lightweight markup language—training content becomes text-based, modular, and version-controlled. This “Training-as-Code” approach offers several key advantages:

  • Collaborative by Nature: Using tools like Git, multiple contributors can track changes, manage releases, and accept community contributions via pull requests.

  • Single Source, Multiple Outputs: A single text file can be rendered into various formats (slides, web pages, or handbooks) using tools like Antora or Pandoc.

  • Modular Content: Content is broken down into small, reusable snippets. As shown in the “Modular Content Structure” diagram, specific modules (e.g., JS, PHP, or C# specifics) can be pulled into different “Courses” as needed.

  • Forkable: Just like software, an organization can “fork” the OSILK base materials and add their own internal compliance layers without breaking the link to the original source.
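To make the idea concrete, here is a minimal sketch of what a modular, forkable course file could look like in AsciiDoc. The module paths and file names are hypothetical, invented for illustration, not actual OSILK files:

```asciidoc
= Open Source Basics: Company Edition
:toc:

// Pull shared modules from the (hypothetical) base material ...
include::modules/licensing/license-families.adoc[]
include::modules/contribution/pull-request-basics.adoc[]

// ... then layer on an organization-specific policy module,
// maintained only in the company's own fork.
include::internal/policy/approved-license-list.adoc[]
```

Because the course is just an ordered list of `include::` directives, a fork can swap, add, or drop modules while still pulling upstream updates to the shared ones.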

Roadmap and Real-World Use

The initiative, currently in the Eclipse Incubation phase, is already seeing adoption by major players like Michelin and various engineering schools.

The future roadmap for OSILK focuses on expanding content—moving from basic awareness to deep dives into consumption, contribution, and launching open-source projects. There is also a strong push toward automation and localized translations to make open-source literacy accessible on a global scale.


Key Takeaway

The consensus among participants was that scaling education requires the same agility we apply to software. By adopting the Training-as-Code mindset, organizations can finally move past the “static slide” era and build a living, breathing knowledge base for the open-source community.

 

 

Stream Introduction and FOSS license scanning: The why, the how and the community approach


FOSSology: Open Source License Compliance

A recent presentation introduced FOSSology, a key tool for managing open source license compliance. The session covered its core functions, workflow, license identification, copyright handling, and reporting capabilities.

What is FOSSology?

FOSSology is a powerful, open-source (GPL-2.0 licensed) framework for managing open source compliance. It helps with:

  • License Management: Creating, modifying, and assigning risk levels and compatibility rules to licenses.
  • Obligation Management: Defining and linking obligations to licenses.
  • Acknowledgment Storage: Storing necessary acknowledgments.

While powerful, efficient use often requires training. Installation on Linux from source is straightforward. A key takeaway is that full license analysis cannot be entirely automated today.

FOSSology uses various agents for license identification (Nomos, Monk, Ojo, Scancode) and copyright processing (FOSSology agent, Scancode). It also includes agents for keyword search, IP, and ECC.

Overall Workflow

The typical FOSSology workflow involves:

  1. Creating a folder (if needed).
  2. Uploading the component (reusing phrases is possible).
  3. Running license analysis.
  4. Processing copyrights.
  5. Performing ECC checks.
  6. Editing and configuring settings.
  7. Downloading and reviewing reports.

Candidates and obligations can be added during license analysis. When uploading, users can ignore pre-configured folders (e.g., “tests,” “.github,” “examples”) to streamline the analysis. Options exist for automatic license conclusions, reusing data from past packages, and deactivating copyrights.

License Identification

Identifying the correct license is critical. FOSSology offers:

  • Multiple Scanners: Nomos, Monk, Ojo, Scancode.
  • Text Highlighting: For quickly spotting changes in license text.
  • Matched License Overview: Provides immediate insights without a full package analysis.

Differences between scanner findings and final conclusions are common. The tool supports manual file-by-file inspection, bulk identification via reports, and folder-level license assignment. It also handles individual license texts, acknowledgments, and comments.

For unknown licenses, manual searching might be needed. FOSSology allows adding comments to document the steps taken for a license conclusion, providing an audit trail in reports and SPDX tags. It supports Unicode in license texts.

License Compatibility: FOSSology allows defining custom compatibility rules or importing existing sets (e.g., from OSADL).

Copyright Statements

FOSSology extracts copyright statements using regular expressions. These often require post-processing to remove clutter, with two views available: file view and folder/upload view.
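As a rough illustration of the principle (FOSSology’s real patterns are far more elaborate), a naive regex-based extractor might look like this; its raw matches also show why post-processing is needed:

```python
# Minimal illustration of regex-based copyright extraction. FOSSology's real
# patterns are much more sophisticated; this only shows the principle, and why
# raw matches typically need post-processing (deduplication, comment clutter).
import re

COPYRIGHT_RE = re.compile(
    r"(?:Copyright|\(c\)|\u00a9)\s+[^\n]{0,80}", re.IGNORECASE)

def extract_copyrights(source: str) -> list[str]:
    # Deduplicate and strip trailing comment clutter such as "*/".
    hits = {m.group(0).rstrip(" */") for m in COPYRIGHT_RE.finditer(source)}
    return sorted(hits)

sample = """\
/* Copyright 2021-2024 Example Corp. */
// (c) 1999 Jane Hacker <jane@example.org>
int x; /* copyright? no real statement here */
"""
print(extract_copyrights(sample))
```

Even this tiny example needs the `rstrip` step to remove the `*/` comment terminator from the first match, which mirrors the file-view/folder-view cleanup work described above.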

Reporting

FOSSology offers extensive reporting features at both component and folder levels.

Component Level Reports: DEPS files, ReadMe_OSS, SPDX V2/V3, CycloneDX, Unified Report, License List, Copyright List.

Folder Level Reports: ReadMe_OSS, SPDX V2/V3, CycloneDX.

A notable feature is “enable OSSelot export,” which generates valid SPDX files along with a well-formatted ReadMe_OSS. This addresses the fact that valid SPDX files do not necessarily contain full license texts, helping users avoid a common compliance pitfall.

SPDX files generated by FOSSology adhere to SPDX-2.3, including checksums, license conclusions, comments, copyright info, and scanner findings.

OSSelot and FOSSology: Streamlining Open Source Compliance

A recent presentation highlighted OSSelot, an Open Source Curation Database, and its integration with FOSSology to simplify open source license compliance. The core message was that while some manual review remains essential, reusing curated licensing and copyright information can drastically cut the time needed to clear software packages.

What is OSSelot?

OSSelot is a public database offering curated compliance data for commonly used FOSS components and associated tools. It stores:

  • License and Copyright Analysis: Results from thorough analysis.
  • Metadata: Information like download location, package creators, reviews, and comments (often in README or info.json).
  • Standard Reports: SPDX (Tag:Value, JSON, YAML, RDF) reports with concluded licenses and copyright notices.
  • Disclosure Documents: Aggregated license texts, copyright notices, and acknowledgments per package.

How to use OSSelot Data with FOSSology

OSSelot data significantly streamlines compliance by allowing users to leverage pre-analyzed information. This can be done in two main ways:

1. Manual Workflow (via GUI):

  • Find closest version: Locate the nearest version of the package in OSSelot.
  • Upload and Reuse: Upload its source code to FOSSology without scanning, but use the “Reuse” function, referencing the OSSelot package. This automatically clears the package.
  • Upload required version: Upload the actual required version of the package. Run scanners, and then reuse the results from the previously cleared OSSelot package.
  • Manual clear: Manually clear any remaining (new or modified) files.

2. Automated Workflow (via FOSSology REST API):

This method offers greater automation, especially for large-scale operations.

  • Discover OSSelot versions: Use the REST API to find available OSSelot package versions.
  • Upload OSSelot source: Upload the source code of the relevant OSSelot package into FOSSology (e.g., using a curl command with the package URL). Crucially, this is done without scanning.
  • Trigger OSSelot Import: This step automatically clears the entire package within FOSSology based on the OSSelot data.
  • Upload required source: Upload the source code of the actual version you need to clear. Run scanners and reuse the data from the now-cleared OSSelot package.
  • Manual clear: Handle only the remaining (new or modified) files manually.
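The API-driven flow above can be sketched in Python. The endpoint path, header names, and payload fields below are illustrative assumptions, so check them against your FOSSology server’s REST API documentation before use; the requests are only built here, never sent.

```python
# Sketch of the automated OSSelot-reuse flow against a FOSSology REST API.
# Endpoint paths, header names, and payload fields are ASSUMPTIONS for
# illustration -- verify them against your server's API docs. The requests
# are only constructed, not sent.
import json
import urllib.request

BASE = "https://fossology.example.com/repo/api/v1"   # hypothetical server
TOKEN = "REPLACE_ME"                                  # REST API token

def build_request(path: str, payload: dict, method: str = "POST"):
    return urllib.request.Request(
        f"{BASE}{path}",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {TOKEN}",
                 "Content-Type": "application/json"},
        method=method,
    )

# 1. Upload the OSSelot baseline from its URL, *without* scheduling scanners.
upload_baseline = build_request("/uploads", {
    "location": {"url": "https://example.org/osselot/curl-8.5.0.tar.gz"},
    "scanOptions": {},          # no agents -> no scan
})

# 2. (Trigger the OSSelot import so the baseline is cleared.)
# 3. Upload the version you actually need, with scanners plus reuse of
#    the cleared baseline's conclusions.
upload_target = build_request("/uploads", {
    "location": {"url": "https://example.org/src/curl-8.6.0.tar.gz"},
    "scanOptions": {"analysis": {"nomos": True, "monk": True, "ojo": True},
                    "reuse": {"reuse_upload": 42}},  # id of cleared baseline
})

print(upload_baseline.get_method(), upload_baseline.full_url)
print(upload_target.get_method(), upload_target.full_url)
```

The point of the sketch is the shape of the automation: two uploads, one without scanning and one with scanners plus a reuse reference, after which only the delta needs human attention.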

Key Benefits

The integration of OSSelot with FOSSology allows for:

  • Significant Time Savings: Reusing curated data drastically reduces the manual effort required for clearing packages.
  • Increased Accuracy: Leveraging expert-curated data improves the reliability of compliance conclusions.
  • Scalability: The API-driven approach enables automation for managing compliance across many components.

This synergy between OSSelot’s curated data and FOSSology’s powerful analysis capabilities presents a highly efficient solution for modern open source license compliance challenges.

 

Using Apache Airflow to Automate Autonomous Driving Tests


This presentation, “Using Apache Airflow to Automate Autonomous Driving Tests” by Bosch, details the significant challenges of testing software for autonomous vehicles and how Apache Airflow provides a robust solution.

The core problem lies in the sheer scale of testing required: to statistically prove that autonomous vehicles are safer than human drivers (e.g., a 20% lower fatality rate with 95% confidence), an astronomical 14 billion kilometers of testing is needed. Physical testing alone would take 400 years with a fleet of 100 vehicles operating non-stop, which is impractical and statistically insufficient to ensure safety. Moreover, the “chaos of reality” (as illustrated by a chaotic street scene) demands testing across an immense number of complex scenarios. Standard CI/CD tools fall short here, as they are designed for short-lived code builds, not the massive test volumes, dynamic workflows, complex dependencies, and specialized hardware environments inherent to autonomous driving development.

Bosch, in collaboration with Cariad through the “Automated Driving Alliance,” adopted Apache Airflow as their orchestrator to manage thousands of parallel test executions. Airflow was chosen for its large community, Python-based workflow-as-code approach, enterprise-readiness, scalability, vendor/tech neutrality (Kubernetes, Spark/Hadoop, Docker), and Apache license, avoiding vendor lock-in. They even leverage Airflow for “Edge Worker” deployments to manage testing on remote sites with specialized hardware.

Key lessons learned include the importance of building on mature products rather than developing in-house solutions, leveraging the community for support, and prioritizing upstream contributions to minimize custom code. Bosch actively contributes to Airflow, helping shape its development in a direction that meets their critical safety needs.

Indeed, Bosch has been an extremely active contributor to the Apache Airflow project, making over 900 contributions, including new features, bug fixes, and improvements. They are deeply involved in the Airflow community through conferences, podcasts, and discussions, demonstrating a strong commitment to open-source collaboration and development. This extensive contribution highlights how they not only use Airflow but also actively help evolve it to meet the demanding requirements of autonomous driving test automation.
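The “workflow-as-code” idea at the heart of Airflow can be shown in miniature. The toy runner below is not Airflow itself; it only illustrates the underlying pattern of declaring tasks and their dependencies in Python and executing them in dependency order (the task names and return values are invented for the example):

```python
# Workflow-as-code in miniature: an Airflow DAG is a Python object whose task
# dependencies determine execution order. This toy runner (NOT Airflow) shows
# the same idea for a build -> simulate -> evaluate test pipeline.
from graphlib import TopologicalSorter

def build_image():     return "image"
def run_scenarios():   return "1000 scenario results"
def evaluate_kpis():   return "KPI report"

# Each entry reads "task depends on ...", mirroring Airflow's
# `upstream >> downstream` dependency syntax.
dag = {
    "build_image": set(),
    "run_scenarios": {"build_image"},
    "evaluate_kpis": {"run_scenarios"},
}
tasks = {"build_image": build_image,
         "run_scenarios": run_scenarios,
         "evaluate_kpis": evaluate_kpis}

order = list(TopologicalSorter(dag).static_order())
results = {name: tasks[name]() for name in order}
print(order)   # dependency-respecting execution order
```

In real Airflow the scheduler would run thousands of such scenario tasks in parallel across Kubernetes or edge workers; the code-defined dependency graph is what makes that orchestration reviewable and versionable like any other software.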

 

 

AI Systems Engineering: The New Discipline to Rescue AI from the “Valley of Death”

By Featured

AI is everywhere, yet true, reliable AI innovation often feels out of reach. With only 9% of organizations achieving AI maturity (Gartner 2024) and 95% of GenAI projects expected to fail (MIT 2025), it’s clear: AI needs a disciplined approach to move from hype to real-world impact.

Dr. Thomas Usländer from Fraunhofer IOSB highlighted a critical solution at OpenChain and Friends 2026: AI Systems Engineering.

Why You Need AI Systems Engineering

Simply put, AI only becomes an innovation when it’s reliably, securely, and efficiently applied. We’re currently in the “Trough of Disillusionment” on the Gartner Hype Cycle for AI – where initial excitement fades as projects hit roadblocks. AI Systems Engineering is our map out of this trough.

It’s about treating AI not as magic, but as complex systems that need proper engineering.

What Is It? (The Core Idea)

AI Systems Engineering is a new discipline focused on:

  1. Methodology: Structured ways to build and deploy AI. Think of PAISE® (Process Model for AI Systems Engineering) – it even treats data as “sub-systems” with their own development cycles.
  2. Data Management (Data Spaces): AI needs data! Open, secure data-sharing platforms like Catena-X are crucial for industrial AI to scale and work together.
  3. Responsible AI: With regulations like the European AI Act, building AI responsibly (considering roles, risks, and ethics) isn’t optional – it’s integrated into the engineering process.
  4. System-Wide View: AI isn’t just an algorithm; it’s part of a larger system. This discipline ensures AI integrates smoothly and safely into broader operations.

AI Systems Engineering + Data Spaces: The Perfect Pair

These two concepts are inseparable. AI Systems Engineering gives you the “how-to” (the engineering process), while Data Spaces provide the “what-to-use” (the secure, shared data). Together, they enable the efficient development, deployment, and operation of AI systems, especially for industrial uses.

The Bottom Line

AI is powerful, but its true value is unlocked through discipline. AI Systems Engineering is crucial for making AI reliable, compliant, and genuinely innovative. Without it, many AI projects risk getting stuck in the “Valley of Death.” It’s the engineering foundation AI needs to thrive.

 

 

The Last Mile Problem: Turning Executive Support into Real Open Repo Contributions

By Featured

The following information was presented at this event:

  • What is AGL? Automotive Grade Linux is a non-profit, open-source Linux-based collaborative project hosted at the Linux Foundation. Its goal is to build the car of the future through rapid innovation by uniting the automotive and software industries. It covers areas like infotainment, instrument clusters, Head-up Displays (HUD), telematics/connectivity, functional safety, and Advanced Driver Assistance Systems (ADAS).
  • AGL at a Glance: It’s a Linux Foundation collaborative project with members including automakers, Tier 1 suppliers, and technology companies. It started in 2015 as the Unified Code Base (UCB) – an open platform for Software-Defined Vehicles (SDVs). It has been in production in Toyota and Lexus vehicles since 2018, with a refresh in 2026, and will be integrated into Subaru, Mazda, and Mercedes-Benz Vans. Its scope has expanded from infotainment to instrument clusters, telematics, ADAS, and beyond.
  • 10+ Years of Tier 1/OEM Collaboration: AGL has proven that competitors can collaborate on shared software. It provides a neutral ground where OEMs and Tier 1s work side-by-side on a common platform. Code contributions come from automotive companies such as Toyota, Honda, Panasonic, Aisin, Denso, Jaguar Land Rover, Denso Ten, Mitsubishi, Daimler, and Subaru. This shared investment reduces duplication and accelerates innovation, resulting in production-ready open-source software in millions of vehicles.
  • “The Last Mile Problem”: Lack of Senior Management Buy-In: The primary organizational barrier to open-source contribution is that leadership often fails to see the business value of contributing. Open source is not perceived as a strategic asset, and there are concerns about competitive advantage and intellectual property leakage. Without executive sponsorship, Open Source Program Offices (OSPOs) and contribution efforts stall, preventing developers from posting code to open repositories.
  • AGL OSPO Expert Group: Launched in November 2024, this group is led by Toyota, with key members including Panasonic and Honda. Its objectives are to encourage companies to establish their own OSPOs, share pain points and collaborate on solutions, develop best practices for open source in the automotive industry, and address business restrictions (e.g., export control, anti-trust). The group meets monthly and is open to everyone who wants to participate and support its work.
  • AGL OSPO Expert Group – Executive Support: The group recognized the need for open-source sponsorship at the highest levels within companies. They created an “Executive Slide Deck,” available for anyone to use, to promote the value and usage of open source to executives. This deck includes case studies from Honda, Toyota, Bosch, and an unnamed Tier One supplier.
  • Deployment Example: Suzuki e Vitara: New Suzuki EVs feature AGL and Qt, developed by Aisin and Yazaki.

In summary, I would take this as a good example of how many different car manufacturers can build on one shared base, and how anyone can participate to make the project better and better.

 

 

Open Source based SupplyChain Management at scale


This lecture explained the Open Source Tooling Group’s mission: to simplify and standardize how companies manage open source software compliance throughout their development and supply chains. The core challenge addressed was the difficulty of truly knowing whether various compliance and security tools are working correctly, integrating smoothly, and consistently producing reliable data such as Software Bills of Materials (SBOMs). Traditional “plugfests” or superficial comparisons often don’t provide the deep insights needed.

At the heart of the recommended tooling solution is the OSS Review Toolkit (ORT). This isn’t just a single-purpose tool; it’s designed as a comprehensive “virtual conveyor belt” for open source compliance. ORT can automatically analyze a software project’s dependencies, download its source code, scan it for license and copyright information (often using tools like ScanCode), consult vulnerability databases (like VulnerableCode) for security risks, evaluate all these findings against an organization’s specific policies, and then generate detailed reports, including SBOMs in industry-standard formats like SPDX and CycloneDX. It acts as an orchestrator, integrating various specialized open-source tools into a cohesive workflow.

A major advantage and a key differentiator highlighted by the OpenChain project is ORT’s robust and readily available testing infrastructure.

The ORT Server currently has an OCCTET test instance. This instance allows companies to easily create and run full, end-to-end simulations of their entire software supply chain. The most effective way to test is to take an identical “dummy repository” (publicly available online, designed to be more complex than a simple “Hello World,” and containing realistic dependencies) and run it through various compliance tools. By processing the same dummy repository through ORT’s full pipeline, users can then compare the results generated by different tools, verify ORT’s accuracy, and confirm that their entire compliance workflow is functioning as expected. This allows for clear benchmarking, showcasing, and collaborative testing of compliance processes.

ORT’s output can also be fed into tools like Grafana, which is very helpful for management: red flags on the platform become easy to spot at a glance.

 

 

 

 

AGL Assessment Automation – Overview and Insights


The discussion revolved around how to effectively manage Software Bill of Materials (SBOMs) in the automotive and embedded software industries, which are complex and critical. The core challenge is that without automated SBOMs, managing risks across many software parts is extremely difficult, especially given regulatory requirements and complex supply chains.

A key focus was on leveraging existing open-source tools and frameworks to streamline this process. Automotive Grade Linux (AGL), a non-profit open-source Linux project for automotive systems, was highlighted as a strong starting point. By combining AGL with Yocto (a build toolchain), the presenters proposed a robust foundation for embedded SBOM operations. The session also noted that no certification is required for AGL, even for production use, which is a huge advantage.

The main idea was to build an automated system for assessing SBOMs, called AGL Assessment Automation (AAA). Its purpose is to create a reference system and share best practices for SBOMs in these industries. This involves:

  • Validating policy-based assessment automation within the AGL Continuous Integration (CI) system.
  • Adopting cybersecurity best practices from organizations like OpenSSF and CNCF, covering aspects like SBOM lifecycles and SLSA (Supply-chain Levels for Software Artifacts).
  • Targeting SPDX 3.0 JSON as the preferred SBOM format (which is Yocto-compatible).
  • Collaborating with various open-source communities like Yocto, OpenSSF, OpenChain, and SPDX.
  • Utilizing open-source, reusable toolchains.

The presentation showed a best-case implementation flow for SBOMs, involving steps like generating, verifying, analyzing, enriching, and sharing SBOMs, with a continuous focus on risk management and vulnerability monitoring. A crucial pipeline example demonstrated building an SBOM from AGL, verifying it, analyzing risks, enriching it with more data, attesting its validity, and finally publishing it.

For risk analysis, the proposed system would normalize SBOM data from various sources (like Yocto environments generating SPDX 2 or 3) and then use a policy engine, such as OPA (Open Policy Agent), to perform policy-based risk analysis against defined policies.
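In OPA the policy itself would be written in Rego; the pure-Python sketch below only mimics the shape of such a policy-based risk check over normalized SBOM component records. The field names, denied-license list, and CVSS threshold are illustrative assumptions, not part of any actual AAA policy set:

```python
# Sketch of a policy-based risk check over normalized SBOM records. In a real
# setup the policy would be Rego evaluated by OPA; field names, the denied
# license list, and the CVSS threshold here are illustrative assumptions.
DENIED_LICENSES = {"AGPL-3.0-only", "SSPL-1.0"}
MAX_CVSS = 7.0

def assess(component: dict) -> list[str]:
    violations = []
    if component.get("license") in DENIED_LICENSES:
        violations.append(f"denied license: {component['license']}")
    for cve in component.get("cves", []):
        if cve["cvss"] >= MAX_CVSS:
            violations.append(
                f"high-severity {cve['id']} (CVSS {cve['cvss']})")
    return violations

sbom = [
    {"name": "libfoo", "license": "MIT", "cves": []},
    {"name": "libbar", "license": "SSPL-1.0",
     "cves": [{"id": "CVE-2024-0001", "cvss": 9.8}]},
]
report = {c["name"]: assess(c) for c in sbom}
print(report)   # components mapped to their policy violations
```

The value of the policy-engine approach is that this logic lives outside the pipeline: the same normalized SBOM data can be re-evaluated whenever the policy changes, without regenerating the SBOMs.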

Initial proof-of-concept work showed promising results, particularly in validating Yocto SPDX 3 SBOMs generated from AGL. While tools for SPDX 3.0 validation are still emerging, a simple validator was implemented as a proof-of-concept.
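In the same proof-of-concept spirit, a minimal structural validator for SPDX 3.0 JSON might look like the sketch below. SPDX 3.0 serializes as JSON-LD; the keys checked here (“@context”, “@graph”, per-element “type” and “spdxId”) reflect that layout but are a deliberate simplification of the full specification, and the sample document is invented:

```python
# Minimal structural check in the spirit of the proof-of-concept validator
# described above. The required keys ("@context", "@graph", per-element
# "type"/"spdxId") reflect SPDX 3.0's JSON-LD layout but are a deliberate
# simplification, not the full spec. The sample document is invented.
import json

def validate_spdx3(doc: dict) -> list[str]:
    errors = []
    if "@context" not in doc:
        errors.append("missing @context")
    graph = doc.get("@graph")
    if not isinstance(graph, list) or not graph:
        errors.append("missing or empty @graph")
        return errors
    for i, element in enumerate(graph):
        for key in ("type", "spdxId"):
            if key not in element:
                errors.append(f"element {i}: missing {key}")
    return errors

sample = json.loads("""{
  "@context": "https://spdx.org/rdf/3.0.1/spdx-context.jsonld",
  "@graph": [
    {"type": "software_Package", "spdxId": "urn:example:pkg-1",
     "name": "busybox"},
    {"type": "Relationship"}
  ]
}""")
print(validate_spdx3(sample))   # the second element lacks an spdxId
```

Even a shallow check like this catches the most common generation errors early in a CI pipeline, before the SBOM is enriched, attested, and published downstream.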

Looking ahead, the next steps include exploring different policy engines like OPA or OSS Review Toolkit (ORT), enhancing CVE/VEX operations within the Yocto ecosystem, and further integrating SLSA for improved software supply chain security.