
The Reserve Bank of India (RBI) has issued Master Directions on cyber resilience and digital payment system controls, emphasizing a "Secure by Design" approach to application security for digital payment systems. This directive underscores the growing importance of robust security measures in India's fast-growing digital payments landscape and marks a significant shift toward integrating security at every stage of the software development lifecycle (SDLC). This isn't merely a compliance checkbox; it's a fundamental necessity in today's threat landscape. This blog explores the technical complexities of implementing the framework, addresses key challenges, and presents CleanStart as a robust solution.

“CleanStart is a repository of secure, hardened, and compliant container images and packages that streamlines development and bolsters application security.”

Deconstructing "Secure by Design" 

"Secure by Design" is more than just a concept; it’s a fundamental approach to software development that integrates security at every stage of the Software Development Life Cycle (SDLC), from initial planning to deployment and ongoing maintenance.

Challenges in Implementing Secure by Design 

Open source has become a dominant force in modern application development, with a vast majority of software projects leveraging open-source libraries, frameworks, and tools. Popular frameworks like React, TensorFlow, and Kubernetes have revolutionized web development, machine learning, and cloud computing, respectively. 

While open-source components offer numerous advantages, they introduce a significant security challenge.  Most modern applications, including digital payment systems, rely heavily on open-source components and containerized environments. While this accelerates development, it also introduces critical security risks: 

These risks include the use of untrusted or malicious container images, which raises exposure to supply chain attacks. Maintaining an up-to-date component inventory and patching vulnerabilities is a significant undertaking in itself, compounded by the complexity of modern applications, limited resources and expertise, and constant time-to-market pressure to release new features and functionality.
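That inventory work can be partly mechanized: cross-referencing an SBOM-style component list against a set of known-vulnerable versions is a small script. A minimal sketch in Python (the component entries and advisory list below are made-up illustrations, not a real advisory feed):

```python
# Cross-reference an SBOM-style component inventory against known-bad versions.
# The components and advisories below are made-up illustrations.

def find_vulnerable(components, advisories):
    """Return components whose (name, version) pair appears in the advisory set."""
    bad = {(a["name"], a["version"]) for a in advisories}
    return [c for c in components if (c["name"], c["version"]) in bad]

components = [
    {"name": "openssl", "version": "1.1.1k"},
    {"name": "zlib", "version": "1.2.13"},
]
advisories = [{"name": "openssl", "version": "1.1.1k"}]

flagged = find_vulnerable(components, advisories)
print([c["name"] for c in flagged])  # the stale openssl build gets flagged
```

In practice the component list would come from a real SBOM (SPDX or CycloneDX) and the advisory set from a vulnerability feed, but the matching logic stays this simple.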

The Secure by Design mandate requires that all digital payment applications proactively mitigate these risks. Traditional security solutions like run-time scanning and patch management, while essential, do not address inherent risks in the software supply chain itself. 

Here's what sets CleanStart apart: it is a trusted repository of vulnerability-free, hardened container images and packages, ensuring that every component used in payment applications starts on a secure foundation, in line with the “Secure by Design” principle.

Key Benefits for Payment Companies: 

Here’s how it helps organizations comply with RBI’s Secure by Design guidelines: 

Key technical advantages: 

Conclusion: Building a Secure Future for Digital Payments 

The RBI's "Secure by Design" framework is not just a regulatory hurdle; it's an opportunity to build a more secure and resilient digital payments ecosystem.   

CleanStart empowers development teams to embrace this framework effectively, providing a secure foundation for building next-generation digital payment applications.   
By leveraging CleanStart's curated and hardened images, organizations can significantly reduce their security risk, streamline development processes, and focus on innovation.  

In the complex world of digital payments, security is not an afterthought; it's a fundamental requirement baked in from the design stage, and CleanStart provides the tools to achieve it!

After engaging with hundreds of CISOs worldwide, it has become evident that the role of the CISO is undergoing a significant transformation. As organizations increasingly evolve into technology-centric entities, the traditional network-focused security approach is no longer adequate. 

The Shifting Security Paradigm 

A recurring theme in these discussions is the shift from merely protecting networks to securing applications, which are now the lifeblood of businesses. This change is both profound and irreversible. The focus is moving from network security to application security, emphasizing that if applications are robust and self-protected, many cyber-attacks can be thwarted. 

Gone are the days when CISOs could concentrate solely on perimeter defense and network security. Today's CISOs must adopt a product leader mindset, recognizing that security is not an add-on layer, but an integral component baked into the development process from the start. 

The Product Security Mindset 

As one CISO from a major bank shared, "Five years ago, I spent 80% of my time on network security. Today, 70% of my focus is on application security, product security, and supply chain integrity." This shift is not an isolated case but a new norm. 

Modern CISOs need to: 

Why This Evolution Matters 

The stakes have never been higher. A vulnerability in an application is not just a security issue; it is a business risk that can directly impact millions of customers. Consider these facts: 

Security by Design: The New Imperative 

Forward-thinking CISOs are embracing "security by design" as their guiding principle. This approach involves: 

Early Engagement: Participating in initial product planning, influencing architecture decisions, and setting security requirements upfront. 

Supply Chain Oversight: Demanding vendor transparency, reviewing security practices, assessing SBOM completeness, and monitoring vulnerability management. 

Continuous Validation: Regular security assessments, automated testing integration, continuous monitoring, and real-time threat analysis. 

The Vendor Responsibility Shift 

An emerging trend is the shift of security gap management from customers to vendors. CISOs are demanding more from their technology providers, including: 

CleanStart by Triam: Embodying the New Paradigm 

This evolution in CISO thinking is precisely why we developed CleanStart. We recognized that: 

CleanStart offers: 

The Path Forward 

For CISOs looking to evolve into this new role, consider the following steps: 

Shift Left: Engage earlier in the product lifecycle, build security into the design phase, and automate security controls. 

Build Bridges: Partner with development teams, engage with product managers, and collaborate with vendors. 

Think Product: Understand user experiences, consider security usability, and balance risk and functionality. 
 
The Bottom Line 

The transition from Network CISO to Product CISO is not optional – it is imperative. In a world where every company is a technology company, security must be woven into the fabric of product development, not bolted on as an afterthought. 

A security vulnerability, CVE-2025-23359, has been identified in the NVIDIA Container Toolkit. This is a bypass of the original patch for CVE-2024-0132. The vulnerability was discovered by Wiz Research. 

The flaw is a Time-of-Check/Time-of-Use (TOCTOU) race condition: by manipulating file paths with a symbolic link during mount operations, an attacker can mount the host's root file system into a container, gaining unrestricted access to it. Although this initial access is read-only, attackers can abuse Unix sockets to launch new, privileged containers and achieve full host compromise.
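The check-then-use race at the heart of a TOCTOU bug is easy to demonstrate in miniature: a path that passes validation can resolve somewhere else entirely by the time it is used. A small Python sketch of the general pattern (it uses ordinary files and symlinks to show the race, not the actual Container Toolkit code path):

```python
import os
import tempfile

# Time-of-Check/Time-of-Use: a path validated at one moment can point
# somewhere else by the time it is actually used.
workdir = tempfile.mkdtemp()
mount_src = os.path.join(workdir, "allowed")
secret = os.path.join(workdir, "host-root")

with open(mount_src, "w") as f:
    f.write("harmless")
with open(secret, "w") as f:
    f.write("sensitive")

# Time-of-check: the path is a plain file that resolves to itself.
checked = os.path.realpath(mount_src)
assert checked.endswith("allowed")

# Between check and use, an attacker swaps the file for a symlink.
os.remove(mount_src)
os.symlink(secret, mount_src)

# Time-of-use: the same path now resolves to the sensitive target.
resolved = os.path.realpath(mount_src)
print(resolved == os.path.realpath(secret))  # True
```

The fix in such cases is to eliminate the gap, for example by resolving the path once and operating only on the resolved file descriptor.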

Conditions/Preconditions: 

Who is at risk? 

Attack Vector: 

Affected Versions: 

Recommendations: 

Triam Security's Solution:

At Triam Security, we understand the importance of security in your AI pipelines. That's why CleanStart's vulnerability-free, signed images provide a trusted and verified source for your most critical workloads.

Stay safe! 

In February 2024, the JavaScript community faced another significant supply chain security incident when the popular lottie-player package was compromised. This attack serves as a stark reminder of the vulnerabilities in our modern software supply chain and the importance of maintaining robust security practices. Let’s dive into what happened, its implications, and how developers and organizations can protect themselves against similar threats.

What is lottie-player?

Before diving into the attack details, it's worth understanding what makes this incident particularly significant. Lottie-player is a widely-used web component that renders After Effects animations exported as JSON. It's particularly popular among developers who want to add high-quality animations to their web applications without compromising on performance.

The Attack - Timeline of Events

1. The attackers published a compromised version (2.0.2) of the package to npm

2. The malicious code was discovered by the security community

3. The compromised package was quickly removed from npm

4. A clean version (2.0.3) was released as a fix

Technical Analysis

The attack was sophisticated in its simplicity. The malicious code was designed to:

1. Collect environment variables from the system

2. Encode the collected data in base64 format

3. Exfiltrate the encoded data to an attacker-controlled domain
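To make the risk concrete, the core of such a payload is only a couple of lines: the entire process environment can be serialized and base64-encoded in one step. The snippet below encodes a stand-in dictionary to show the mechanism; it transmits nothing.

```python
import base64
import json

# Illustrative only: how trivially an environment can be packaged for exfiltration.
fake_env = {"NPM_TOKEN": "abc123", "AWS_SECRET_ACCESS_KEY": "xyz789"}

# Serialize and base64-encode in one step, as the malicious code did.
payload = base64.b64encode(json.dumps(fake_env).encode())

# A real payload would POST this blob to an attacker-controlled domain, which is
# why unexpected outbound traffic from build machines is a key detection signal.
print(base64.b64decode(payload) == json.dumps(fake_env).encode())  # True
```

Because the encoding is reversible and cheap, the only reliable defenses are limiting what secrets live in the environment and monitoring egress from build systems.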

Impact Assessment

Immediate Risks

Exposure of sensitive environment variables

Potential compromise of:

Long-term Implications

The attack particularly targeted CI/CD environments where sensitive credentials are often present as environment variables. This could potentially give attackers access to:

Lessons Learned

1. Supply Chain Vulnerabilities

This incident highlights how dependent we've become on third-party packages and the implicit trust we place in package registries. A single compromised package can affect thousands of projects downstream.

2. Security Best Practices

The attack emphasizes the importance of:

3. Environment Variable Management

Organizations should review their practices regarding:

Mitigation Steps

If your project used lottie-player v2.0.2, take these immediate actions:

1. Update to the latest clean version

2. Rotate all potentially exposed credentials

3. Audit systems for signs of compromise

4. Review build logs for suspicious network activity

5. Implement additional monitoring for unauthorized data exfiltration
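Step 1 can be checked mechanically: because a lockfile pins exact versions, a short script can tell you whether the compromised release appears anywhere in your tree. A sketch against a simplified npm `package-lock.json` v2-style `packages` map (the inline lock data is a stand-in for reading your real lockfile):

```python
# Scan a package-lock.json-style structure for a known-compromised release.
# The lock data below is a minimal inline stand-in for a real lockfile.

BAD = {("lottie-player", "2.0.2")}

def audit(lock):
    """Return lockfile paths whose (name, version) is in the known-bad set."""
    hits = []
    for path, meta in lock.get("packages", {}).items():
        name = path.rpartition("node_modules/")[2]  # last path segment is the package name
        if (name, meta.get("version")) in BAD:
            hits.append(path)
    return hits

lock = {
    "packages": {
        "node_modules/lottie-player": {"version": "2.0.2"},
        "node_modules/left-pad": {"version": "1.3.0"},
    }
}
print(audit(lock))  # ['node_modules/lottie-player']
```

The same walk catches transitive copies nested under other packages' `node_modules/` directories, which a top-level `package.json` check would miss.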

Prevention Strategies

Immediate Actions

1. Implement automated dependency scanning

2. Use package lockfiles religiously

3. Set up private npm registries

4. Monitor network traffic from build systems

Long-term Measures

1. Develop a dependency update policy

2. Implement runtime application self-protection (RASP)

3. Create an incident response plan for supply chain attacks

4. Regular security training for development teams

The Road Ahead: Strengthening JavaScript Security

The lottie-player supply chain attack serves as a crucial reminder that even popular and seemingly trustworthy packages can be compromised. It underscores the importance of maintaining robust security practices and never taking the security of our software supply chain for granted.

Supply chain security isn’t a one-time effort—it requires continuous vigilance, collaboration, and adoption of security best practices at every stage of the development lifecycle. By learning from incidents like this, the JavaScript community can work together to build a more secure future.

Call to Action

Has your team implemented a robust strategy for managing JavaScript dependencies? If not, now is the time to act. At Triam Security, we specialize in securing software supply chains, helping businesses protect their applications from emerging threats.

Our solution, CleanStart, offers a proactive approach to supply chain security. By providing hardened container and virtual machine images with a near-zero CVE footprint, CleanStart ensures that your development environment starts on a secure foundation. CleanStart also simplifies dependency management by offering trusted, secure images that reduce the attack surface, making it an essential tool in preventing incidents like the lottie-player compromise.

Reach out to us today to learn how CleanStart can safeguard your codebase and future-proof your software supply chain against evolving threats.

Introduction

Open-source software (OSS) has become a cornerstone of modern technology, driving innovation and collaboration across industries. From its humble beginnings to its current widespread adoption, OSS has transformed the tech landscape. This blog explores the journey of open-source software, its current state, and what the future holds.

The Past: The Birth and Growth of Open-Source

The roots of open-source software trace back to the early days of computing when software was freely shared among researchers. However, the commercialization of software in the 1970s and 1980s led to the rise of proprietary models.

The modern open-source movement began in the 1980s with Richard Stallman's Free Software Foundation (FSF) and the creation of the GNU General Public License (GPL). This license ensured that software could be freely used, modified, and distributed, fostering a culture of collaboration.

A pivotal moment was the release of the Linux kernel by Linus Torvalds in 1991. Combined with GNU tools, Linux demonstrated the power of community-driven development and laid the groundwork for the open-source ecosystem we know today.

The Present: Open-Source in the Mainstream

Today, open-source software is ubiquitous. Major tech companies like Google, Microsoft, and IBM actively contribute to and use open-source projects. Platforms like GitHub and GitLab have made it easier than ever for developers to collaborate on open-source projects.

Open-source software powers much of the internet infrastructure, from web servers like Apache and Nginx to databases like MySQL and PostgreSQL. Popular programming languages such as Python, JavaScript, and Ruby have thriving open-source communities that drive innovation and support.

The adoption of open-source principles has extended beyond software to hardware, data, and even scientific research. Open-source hardware projects like Arduino and Raspberry Pi have democratized access to technology, while open data initiatives promote transparency and collaboration in research.

For CISOs, CTOs, and other technology leaders, the adoption of open-source software offers several advantages:

The Future: Trends and Predictions

The future of open-source software looks promising, with several trends shaping its evolution:

  1. Corporate Stewardship: Increased corporate involvement will drive more structured and sustainable open-source projects. Companies will continue to invest in OSS to drive innovation and maintain competitive advantage.
  2. Enhanced Security Practices: With OSS becoming critical to infrastructure, security will be a top priority. Initiatives like the Open-Source Security Foundation (OpenSSF) aim to improve the security posture of open-source projects through better practices and tooling.
  3. AI and Machine Learning: Open-source frameworks like TensorFlow and PyTorch are at the forefront of AI and machine learning advancements. These projects democratize access to powerful tools and foster innovation in AI-driven solutions.
  4. Decentralization and Blockchain: Open-source principles align well with decentralized technologies. Blockchain projects like Ethereum and Hyperledger exemplify how OSS can drive the development of decentralized applications and systems.
  5. Diversity and Inclusion: Efforts to increase diversity within open-source communities will lead to more innovative and inclusive solutions. Diverse teams bring varied perspectives, enhancing the quality and relevance of OSS projects.

Conclusion

Open-source software has evolved from an alternative approach to a foundational element of modern technological infrastructure. For forward-thinking organizations, it represents not just a development model, but a strategic approach to innovation, collaboration, and technological advancement.

By understanding its past, embracing its present, and preparing for its future, technology leaders can transform open-source from a mere tool to a powerful competitive differentiator.

In today's rapidly evolving landscape of containerized applications, ensuring the integrity and authenticity of container images has become paramount. Container image signing is a crucial security practice that addresses these concerns, providing a robust mechanism to verify the origin and integrity of container images throughout the software supply chain.

What is Container Image Signing? 

Container image signing is the process of digitally signing container images to cryptographically verify their authenticity and integrity. This process creates a digital signature that can be used to validate that an image hasn't been tampered with and comes from a trusted source. 

Why is Container Image Signing Important? 

  1. Supply Chain Security: As organizations increasingly rely on third-party container images, signing helps ensure that these images haven't been compromised during distribution. 
  2. Compliance: Many regulatory standards require verifiable integrity for software components, making image signing essential for compliance. 
  3. Tamper Detection: Signed images allow for the detection of any unauthorized modifications, preventing the deployment of potentially compromised containers. 
  4. Trust Establishment: Signing establishes a chain of trust from the image creator to the end user, crucial in multi-tenant or distributed environments. 

How Container Image Signing Works 

The process of container image signing typically involves the following steps: 

  1. Creating the Image: The container image is built from the source code and dependencies. 
  2. Generating the Signature: A cryptographic hash of the image is created, and this hash is then signed using a private key. The resulting signature is attached to the image. 
  3. Storing the Signature: The signed image, along with its signature, is stored in a container registry. 
  4. Verifying the Signature: When the image is pulled for deployment, the signature is verified using the corresponding public key. This ensures that the image has not been altered and is from a trusted source. 
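The hash-sign-verify flow above can be sketched in a few lines. Production image signing relies on asymmetric key pairs via tools such as Cosign or Notation; the standard-library-only sketch below substitutes an HMAC for the asymmetric signature purely to show the shape of the flow:

```python
import hashlib
import hmac

# Conceptual digest -> sign -> verify flow. Real image signing uses
# asymmetric keys (e.g. via Cosign); an HMAC stands in here so the
# sketch needs only the standard library.

def sign(image_bytes, key):
    digest = hashlib.sha256(image_bytes).digest()          # step 2: hash the image
    return hmac.new(key, digest, hashlib.sha256).digest()  # then sign the hash

def verify(image_bytes, signature, key):
    # Step 4: recompute and compare in constant time.
    return hmac.compare_digest(sign(image_bytes, key), signature)

key = b"registry-signing-key"           # placeholder key material
image = b"layer-tarballs-and-manifest"  # placeholder image content

sig = sign(image, key)
print(verify(image, sig, key))                # True: untampered
print(verify(image + b"tampered", sig, key))  # False: modification detected
```

With real asymmetric keys, only the public half is distributed for verification, which is what lets anyone pulling the image check it without being able to forge signatures.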

Best Practices for Implementing Container Image Signing 

  1. Integrate into CI/CD: Automate signing as part of your continuous integration and deployment pipelines. 
  2. Key Management: Implement robust key management practices, including secure storage and rotation of signing keys. 
  3. Policy Enforcement: Use admission controllers or similar mechanisms to enforce the use of signed images in your container orchestration platform. 
  4. Regular Audits: Conduct regular audits of your signing process and the signatures themselves. 
  5. Education: Ensure that all team members understand the importance of image signing and the processes involved. 

Popular Tools for Container Image Signing 

Several tools are available for signing container images, each with its own features and capabilities: 

Conclusion 

Container image signing is a critical component of a comprehensive container security strategy. By providing a means to verify the integrity and authenticity of container images, it helps organizations build trust in their software supply chain and mitigate risks associated with compromised or tampered images. As container technologies continue to evolve, image signing will likely become an increasingly standard practice in ensuring the security and reliability of containerized applications.

In the world of software development, open source and container technologies have revolutionized the way we build, deploy, and manage applications. Despite their widespread adoption, however, several myths about their security and usage persist. This article debunks these common misconceptions and provides a clearer understanding of the realities.

Myth 1: “Official” Container Images on Docker Hub Are Free of Vulnerabilities

Fact: While Docker Hub provides a wide range of official images, they are not immune to vulnerabilities. These images can still contain security risks, including outdated software versions or unpatched vulnerabilities. It’s essential to regularly scan and assess these images for potential issues. Relying solely on the “official” label can lead to a false sense of security. Regular vulnerability scanning and updates are crucial to maintaining a secure environment.

Myth 2: Free SCA Tools Are Sufficient for CI/CD Security

Fact: Free Software Composition Analysis (SCA) tools are helpful, but they may lack advanced features like vulnerability prioritization, license compliance, and integration with CI/CD pipelines. While they can provide a basic level of security, they often fall short in comprehensive coverage. Investing in commercial SCA tools can offer more robust security features, ensuring that your CI/CD pipeline is well-protected against potential threats.

Myth 3: Updating Images Is a One-Time Security Effort

Fact: Regularly updating container images is crucial, but it’s not a one-time task. New vulnerabilities emerge over time, and maintaining up-to-date images ensures that security patches are applied promptly. Continuous monitoring and updating are necessary to protect against newly discovered vulnerabilities. This ongoing effort helps in maintaining the integrity and security of your applications.

Myth 4: Containers Are Inherently Secure

Fact: Containers offer isolation benefits, but they aren’t foolproof. Security vulnerabilities can exist in the container image, runtime environment, or misconfigurations. While containers can enhance security by isolating applications, they still require proper configuration and management. Misconfigurations or vulnerabilities in the underlying infrastructure can compromise the security of containerized applications.

Myth 5: Finding and Fixing Vulnerabilities Leads to Zero Vulnerabilities

Fact: This is a myth. The software development process is ongoing, and new vulnerabilities can emerge due to evolving technologies, coding errors, or unforeseen attack vectors. Continuous security practices are crucial to maintaining a secure environment. Even after addressing known vulnerabilities, new ones can surface, necessitating ongoing vigilance and proactive security measures.

Myth 6: Scanning Container Images Once Is Enough

Fact: Container images can be built from various sources and may change over time. New vulnerabilities can be introduced through updates or by incorporating compromised dependencies. Regular vulnerability scanning of container images is essential to identify and address potential issues. A one-time scan is insufficient; continuous scanning ensures that any new vulnerabilities are promptly detected and mitigated.

Myth 7: Container Lifespans Are Very Short in Production

Fact: While containers are designed for agility and scalability, their lifespans can vary. They may run for days, weeks, or even months. Proper monitoring, logging, and security practices are essential throughout their lifecycle. Understanding the actual lifespan of your containers and implementing appropriate security measures ensures that they remain secure regardless of their duration in production.

Conclusion

Open source and container technologies offer significant benefits, but they also come with their own set of challenges and misconceptions. By understanding and addressing these myths, organizations can better secure their applications and infrastructure. Regular updates, continuous monitoring, and investing in comprehensive security tools are key to maintaining a robust security posture in the dynamic landscape of software development.

By debunking these myths, we can foster a more informed approach to using open source and container technologies. This not only enhances security but also maximizes the potential of these powerful tools in modern software development.

In today’s digital landscape, securing the software supply chain has become a critical priority. With the increasing complexity of software development and deployment, ensuring the integrity and security of software artifacts is paramount. This is where SLSA (Supply Chain Levels for Software Artifacts) comes into play.

What is SLSA?

SLSA, pronounced “salsa,” is a framework designed to enhance the security of software supply chains. It provides a set of standards and practices aimed at preventing tampering, improving integrity, and securing the software development lifecycle. SLSA is structured into four levels, each representing a progressively stronger security posture.

The Four Levels of SLSA

  1. Level 1: Basic Security Practices
    At this level, the focus is on establishing basic security practices. This includes using version control systems, maintaining build scripts, and ensuring that source code is available and traceable. The goal is to create a foundation for more advanced security measures.
  2. Level 2: Enhanced Security Measures
    Level 2 builds upon the basic practices by introducing more stringent security measures. This includes using signed and verified build artifacts, implementing automated build processes, and ensuring that dependencies are managed securely. The emphasis is on reducing the risk of tampering and ensuring the integrity of the build process.
  3. Level 3: Advanced Security Controls
    At Level 3, organizations implement advanced security controls. This includes using reproducible builds, where the build process can be repeated with identical results, and ensuring that all build steps are logged and auditable. The focus is on creating a transparent and verifiable build process.
  4. Level 4: Comprehensive Security Assurance
    Level 4 represents the highest level of security assurance. This includes using isolated and hermetic builds, where the build environment is completely isolated from external influences, and implementing end-to-end provenance, where the entire build process is documented and traceable. The goal is to achieve the highest level of confidence in the security and integrity of the software supply chain.

Implementing SLSA

To implement SLSA within an organization, several steps should be taken:

  1. Assessment: Begin by assessing current practices against the SLSA levels to identify gaps in security and provenance.
  2. Training: Educate development teams on the importance of secure software practices and the specifics of each SLSA level.
  3. Tooling: Invest in tools that facilitate compliance with SLSA requirements, such as automated build systems that enforce integrity checks.
  4. Continuous Improvement: Regularly review and update practices to align with evolving threats and advancements in technology.

Benefits of Implementing SLSA

Implementing SLSA offers several benefits for organizations looking to secure their software supply chains:

Conclusion

In an era where software supply chain attacks are becoming increasingly common, adopting a robust framework like SLSA is essential. By doing so, companies can enhance their security posture, build trust with stakeholders, and ensure compliance with emerging regulations. As software continues to evolve, frameworks like SLSA will play a crucial role in safeguarding the integrity of software artifacts across industries.

The recent discovery of a critical vulnerability in NVIDIA’s Container Toolkit (CVE-2024-0132) has sent shockwaves through the AI and DevOps communities. This vulnerability serves as a stark reminder of the hidden dangers lurking within our AI infrastructure. Here, we delve into the key lessons learned and the steps that AI practitioners and DevOps teams must take to safeguard their systems.

1. Trust, but Verify

Even images from reputable sources like DockerHub can harbor unforeseen risks. This incident underscores the importance of scrutinizing the source and content of your container images. Always verify the integrity and security of the images you use, regardless of their origin.

2. Container ≠ Bulletproof

Containers are often perceived as secure, but this vulnerability has shown that they are not impenetrable. The CVE-2024-0132 flaw allowed attackers to gain full host system access, highlighting the need for multi-layered security measures. Relying solely on containers for security is insufficient; additional layers of protection are essential.

3. AI Infrastructure is the New Frontier

While much focus is placed on securing AI models, this incident reveals that vulnerabilities in AI tooling and infrastructure pose immediate and critical risks. As AI continues to evolve, so too must our approach to securing the entire AI ecosystem, from models to the underlying infrastructure.

4. Shared Environments Need Extra Care

Using GPU-enabled containers in multi-tenant setups increases the risk of exploitation. This vulnerability highlights the need to reassess isolation strategies in shared environments. Ensuring robust isolation mechanisms can prevent attackers from leveraging vulnerabilities to compromise entire systems.

5. Stay Updated, Stay Safe

Regular updates are your first line of defense against vulnerabilities. NVIDIA’s prompt response with a patch for CVE-2024-0132 underscores the importance of timely updates. Keeping your systems and software up to date is crucial in mitigating the risks posed by newly discovered vulnerabilities.

Conclusion

The AI revolution brings immense possibilities, but it also introduces new vulnerabilities. The discovery of CVE-2024-0132 in NVIDIA’s Container Toolkit is a wake-up call for all AI practitioners and DevOps teams. By staying vigilant, keeping systems updated, and adopting a multi-layered security approach, we can protect our AI infrastructure from potential threats. Remember, your AI’s security is only as strong as its weakest link.

The lessons learned from this incident are clear: trust but verify, implement multi-layered security, focus on the entire AI infrastructure, take extra care in shared environments, and prioritize regular updates.

Copyright 2025 © Triam Security