Song-a-Week 5

Lifeline

Despite the presence of others, loneliness can consume us, and as strong as we can be, life often leaves us in places where we have to rely on others for emotional stability. Lifeline describes a point in my life where I could only find solace and strength in my partner, who was my lifeline and a source of peace amidst life’s chaos.

Implementing Zero-Trust Azure DevOps Environment for Secure Code Deployment

At a previous job, where I was a senior sysadmin, I put together a proposal for a zero-trust approach to building and shipping code for both customer-facing apps and IaC work. I don’t know if it’s been adopted, but my current company is discussing something similar. Here’s a general outline; I’d be interested to get other folks’ thoughts on this approach for critical/regulated workloads. Note that this is built around Microsoft Azure, but the general methodology would be flexible, I think.

Introduction

With growing complexity and evolving security threats, it’s important to consider a Zero-Trust model for DevOps processes. This proposal discusses a hypothetical Zero-Trust Azure DevOps environment that can ship code to production using Azure service principals while completely restricting human users from direct access to production systems.

Zero-Trust Philosophy

Zero-Trust is based on the principle of “never trust, always verify.” In a DevOps cycle, this means that no entity, internal or external, gets automatic access to resources. Authentication and authorization are mandatory for every interaction, even within our internal network.

Key Components

Azure DevOps

The CI/CD pipeline would be set up in Azure DevOps, which provides access control, change tracking, and workflow configs.

Azure Service Principals

Azure Service Principals would provide identities for our pipelines and applications. These credentials are securely stored and managed in Azure Key Vault.

Production Systems

Production systems are also hosted in Azure, with strict controls in place.

Implementation Strategy

Service Principals for Resource Access

Human users do not need direct access to our production environment; instead, Azure Service Principals are used. They handle all interactions with our Azure resources during the build and release pipelines. These service principals have only the permissions required for their specific role and are limited to specific scopes in Azure.
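
As a rough illustration of what that looks like from a pipeline’s point of view, here is a minimal Python sketch using the azure-identity and azure-mgmt-resource packages. Assume the tenant/client IDs, secret, and subscription are placeholders injected by the pipeline, never anything a human handles directly:

    import os

    from azure.identity import ClientSecretCredential
    from azure.mgmt.resource import ResourceManagementClient

    # The pipeline authenticates as a service principal; no human credentials involved.
    credential = ClientSecretCredential(
        tenant_id=os.environ["AZURE_TENANT_ID"],
        client_id=os.environ["AZURE_CLIENT_ID"],
        client_secret=os.environ["AZURE_CLIENT_SECRET"],
    )

    # The principal's role assignment limits it to the scopes this pipeline needs.
    client = ResourceManagementClient(credential, os.environ["AZURE_SUBSCRIPTION_ID"])
    for group in client.resource_groups.list():
        print(group.name)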

Multi-Factor Authentication and Conditional Access

Within Azure DevOps, consider implementing Multi-Factor Authentication (MFA) and conditional access policies, ensuring that only authorized personnel can make changes to the DevOps configurations.

Automated Testing and Code Scans

The pipeline includes rigorous automated tests, including security scans, to ensure that the code is functional, in line with best practices, and free from vulnerabilities (as much as reasonably possible).

Secure Credential Storage

All sensitive information (API keys, database credentials, client secrets) is stored securely in Azure Key Vault and accessed only by authorized service principals during pipeline execution.
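
For example (a sketch, assuming the azure-identity and azure-keyvault-secrets packages; the vault and secret names are made up), a pipeline step might pull a secret like this rather than ever storing it in the repo or in plain-text pipeline variables:

    from azure.identity import DefaultAzureCredential
    from azure.keyvault.secrets import SecretClient

    # DefaultAzureCredential picks up the service principal from the pipeline environment;
    # only that identity has "get" permission on the vault's secrets.
    client = SecretClient(
        vault_url="https://my-pipeline-vault.vault.azure.net",  # placeholder vault
        credential=DefaultAzureCredential(),
    )
    db_password = client.get_secret("prod-db-password").value   # placeholder secret name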

Logging and Monitoring

Microsoft Defender for Cloud (formerly Azure Security Center) and Microsoft Sentinel are configured to provide real-time alerts and centralized logging.

Immutable Infrastructure

Leverage DevOps pipelines to implement an immutable production infrastructure: once a resource is deployed, it is never modified. When changes are needed, the resource is replaced with a new one in the next release. This limits the risk of configuration drift.
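
A conceptual sketch of that replace-don’t-modify release flow is below; the deploy/shift/delete helpers are hypothetical stand-ins for whatever IaC tooling (ARM/Bicep/Terraform) actually creates the new resources and cuts traffic over:

    # Hypothetical helpers standing in for real IaC deployments and traffic cutover.
    def deploy_stamp(name: str) -> str:
        print(f"deploying new stamp {name}")
        return name

    def shift_traffic(to: str) -> None:
        print(f"routing production traffic to {to}")

    def delete_stamp(name: str) -> None:
        print(f"deleting old stamp {name}")

    def release(version: str, previous_version: str | None = None) -> None:
        new_stamp = deploy_stamp(f"app-{version}")   # brand-new resources every release
        shift_traffic(new_stamp)                     # cut over once the new stamp is healthy
        if previous_version:
            delete_stamp(f"app-{previous_version}")  # retire the old stamp; never patch it in place

    release("2024-06-01", previous_version="2024-05-18")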

Advantages

  • Enhanced Security: Zero trust reduces the attack surface by eliminating implicit trust.
  • Compliance: Assuming good management of the underlying infrastructure (sovereignty, encryption, etc.), this approach helps meet regulatory compliance requirements for data protection and access control.
  • Auditability: Each interaction is logged, making it easier to monitor and audit activities.
  • Operational Efficiency: Automated pipelines reduce the possibility of human error, improving reliability. Errors in automated processes can be identified, analyzed, and fixed more quickly, since the fix can be rolled out from one central source.

Challenges

Complexity and Cost

  1. Management Overhead: The Zero-Trust model inherently requires more management. This could include more time spent on configuring and monitoring pipelines, access controls, encryption, and logging.
  2. Cost: More resources may be needed for continuous monitoring, more advanced security tools, and additional Azure services. This could increase overall operating costs.

Operational Risks

  1. Employee Training: With a more complicated security architecture, there’s a need for additional in-house training, or outside resources, for adoption and overall solution design.
  2. Disaster Recovery and Rollback: The immutable infrastructure approach is secure but might complicate rollback strategies. In the event of a flawed release, rolling back to a previous state could become more complex and time-consuming if not configured properly.

Security Concerns

  1. Insider Threats: While the system is designed to protect against external threats, it might still be vulnerable to internal threats, either malicious or accidental.
  2. Service Principal Compromise: If an attacker gains access to a service principal, they could potentially have the same level of access as the service principal itself, which could be highly privileged.
  3. Monitoring Blind Spots: Continuous monitoring is a critical part of this architecture, but it’s not foolproof. There may be blind spots and other configuration challenges with the monitoring system.

Dependencies

  1. Vendor Lock-in: The system relies heavily on Azure services. It would need to remain flexible in the face of service outages and changes to Azure features and pricing.
  2. Software Bugs and Vulnerabilities: Even though the system is designed to be secure, it’s still subject to vulnerabilities in the software that it uses, whether that’s in Azure itself or in other components of the DevOps pipeline.

Policy and Compliance

  1. Regulatory Changes: Compliance requirements change, and there’s a risk that future changes could require significant modifications to this architecture.
  2. Data Residency: Given that Azure is a global cloud provider, issues might arise regarding where data is stored and how it is transmitted internationally, meaning strict management would be required to stay in compliance with data sovereignty laws.

Auditing and Governance

  1. Auditing Complexity: While each interaction is logged, the sheer volume of logs could make auditing a monumental task.
  2. Governance: Enforcing policies consistently across such a complex environment can be challenging.

What sort of magic is fingerprint authentication?

All biometric identification forms broadly share four qualities: everyone has the characteristic, everyone’s is unique, it remains stable over time, and it should be easy to collect. Fingerprints, unlike some other forms such as facial scans, generally require the cooperation of the individual being enrolled in biometric authentication, at which point the characteristics of their fingerprint are saved as a reference template. Any time a new biometric sample is submitted (the trial biometric), it is tested against the reference template. Since biometric systems don’t work to a level of granularity that ensures 100% identical samples each time, a score is calculated for the trial biometric and, as long as it meets the threshold for authentication, the user is authenticated.
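
A toy sketch of that threshold idea, with a placeholder similarity function standing in for a real minutiae-matching algorithm:

    # Placeholder scoring: inverse of mean absolute difference between feature vectors.
    def similarity(reference: list[float], trial: list[float]) -> float:
        diff = sum(abs(r - t) for r, t in zip(reference, trial)) / len(reference)
        return 1.0 / (1.0 + diff)

    # The matcher never expects identical samples, only a score above the threshold.
    def authenticate(reference: list[float], trial: list[float], threshold: float = 0.8) -> bool:
        return similarity(reference, trial) >= threshold

    print(authenticate([0.12, 0.55, 0.91], [0.13, 0.54, 0.90]))  # True: close enough
    print(authenticate([0.12, 0.55, 0.91], [0.60, 0.10, 0.30]))  # False: below threshold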

Fingerprint patterns are stored by capturing images of the fingerprint, which allow the system to observe the ridges and valleys. The image is then divided into regions containing recognizable patterns, which can be further subdivided to collect data points known as minutiae; each minutia is captured as x and y coordinates, a pattern class, and an angle. To fully identify a fingerprint sample, a combination of the number of regions, the number of ridges, and the minutiae is used.
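
If you wanted to model that extracted data in code, it might look roughly like this (field names are purely illustrative, not any vendor’s actual template format):

    from dataclasses import dataclass

    @dataclass
    class Minutia:
        x: int              # position on the x axis within the region
        y: int              # position on the y axis within the region
        pattern_class: str  # e.g. "ridge ending", "bifurcation"
        angle: float        # ridge orientation at this point, in degrees

    @dataclass
    class Region:
        ridge_count: int
        minutiae: list[Minutia]

    @dataclass
    class FingerprintTemplate:
        regions: list[Region]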

Password comparison can be done by comparing an entered password with a one-way hash stored (for example, as PVD) on the given system. Depending on the method of accessing the system, authentication may take more of a challenge-and-response format, where the system being accessed generates a challenge c, which is transmitted to the system from which the user is authenticating. The remote user’s system uses the challenge and a password p shared between the systems to generate a response r = g(p, c). The response value is checked against an expected value r’ calculated by the system being accessed, and if r = r’, authentication succeeds.
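
Here’s a minimal sketch of that challenge-and-response flow, assuming g is an HMAC over the challenge keyed with the shared password; real protocols differ in detail, but the shape is the same:

    import hashlib
    import hmac
    import secrets

    shared_password = b"correct horse battery staple"   # p, known to both ends

    # System being accessed: generate and send the challenge c.
    c = secrets.token_bytes(16)

    # Remote user's system: r = g(p, c)
    r = hmac.new(shared_password, c, hashlib.sha256).digest()

    # System being accessed: compute the expected r' and compare in constant time.
    r_expected = hmac.new(shared_password, c, hashlib.sha256).digest()
    print(hmac.compare_digest(r, r_expected))  # True when both ends share the same password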

While both password and biometric authentication check the received data (a hash or a scanned fingerprint, for example) against an expected value, those values look very different. A password hash, response value, etc. is a calculated field derived from the password originally created by the user or system, and it can be easily changed. A fingerprint (or other biometric) is static unless the person undergoes a physical change, and the data compared between the reference and the trial biometric is based on what was observed from the biometric sample, rather than a calculated field.

BIOS Malware? Scary, but preventable.

There are different types of BIOS out there, though in computing we mostly encounter either 16-bit conventional BIOS or firmware based on the UEFI specification. Despite their differences, both are often referred to simply as BIOS, though there is an important distinction when it comes to security settings like Secure Boot, or mitigating vulnerabilities like BootHole.

There have been a few well-known BIOS vulnerabilities in recent years, including the BootHole vulnerability, which could allow the injection of insecure code into the bootloader. Normally, with a Secure Boot system, there are two databases, Allow (db) and Disallow (dbx), with access secured by platform encryption keys.

The db and dbx databases are used to verify the signatures of executables called during the startup process, checked against signing authorities like the Microsoft 3rd Party UEFI CA.

Because of the many components involved in the Secure Boot process, vulnerabilities can appear at several points; for example, an attacker could replace the bootloader with an older, vulnerable version that is still signed by the CA. In order to keep systems secure, admins have to keep the dbx database current and ensure the latest revocation updates are distributed to physical systems (this doesn’t really apply to VMs, though it does apply to the hardware they run on).
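
A very simplified sketch of the db/dbx decision, using bare hashes where real firmware verifies full signature chains:

    import hashlib

    # Allow list (db) and revocation list (dbx), here as bare SHA-256 hashes.
    db = {
        hashlib.sha256(b"shim-v15.7").hexdigest(),
        hashlib.sha256(b"grub-vulnerable-2.04").hexdigest(),  # was legitimately signed once...
    }
    dbx = {hashlib.sha256(b"grub-vulnerable-2.04").hexdigest()}  # ...and later revoked

    def may_execute(image: bytes) -> bool:
        digest = hashlib.sha256(image).hexdigest()
        return digest in db and digest not in dbx

    print(may_execute(b"shim-v15.7"))            # True: allowed and not revoked
    print(may_execute(b"grub-vulnerable-2.04"))  # False: revocation in dbx wins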

There were also vulnerabilities discovered last year in Intel processors, where input validation could be bypassed or vulnerabilities in flow-control processes exploited. It can be challenging to mitigate these sorts of vulnerabilities right away, since manufacturers don’t push out BIOS updates as often as operating system and application updates.

NIST has a document (https://nvlpubs.nist.gov/nistpubs/legacy/sp/nistspecialpublication800-147.pdf) covering protection of the BIOS, which points out the following threats:

  • Supply-chain attacks
  • User-initiated BIOS updates
  • Network-based system compromise

The document focuses on how those threats could lead to rolling a system BIOS back to a version with security vulnerabilities, possibly without the user even noticing.

NIST’s first recommendation for authenticated BIOS security is that “the authenticated BIOS update mechanism employs digital signatures to ensure the authenticity of the BIOS update image. To update the BIOS using the authenticated BIOS update mechanism, there shall be a Root of Trust for Update (RTU) that contains a signature verification algorithm and a key store that includes the public key needed to verify the signature on the BIOS update image. The key store and the signature verification algorithm shall be stored in a protected fashion on the computer system and shall be modifiable only using an authenticated update mechanism or a secure local update mechanism.” Using a process built on public key cryptography can give organizations greater control over BIOS updates. NIST also provides recommendations for securing local BIOS updates and ensuring system integrity during and after the update process.
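
To make the RTU idea concrete, here’s a small sketch of signature-gated updates using Ed25519 from the Python cryptography package; real vendors use their own signing schemes and key hierarchies, so treat this purely as an illustration:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    # Vendor side: sign the BIOS update image with the private key.
    vendor_key = ed25519.Ed25519PrivateKey.generate()
    bios_image = b"...BIOS update payload..."
    signature = vendor_key.sign(bios_image)

    # Device side (Root of Trust for Update): only the public key lives in the protected key store.
    rtu_public_key = vendor_key.public_key()

    def apply_update(image: bytes, sig: bytes) -> bool:
        try:
            rtu_public_key.verify(sig, image)   # raises if the image or signature was tampered with
        except InvalidSignature:
            return False
        # ...flash the verified image here...
        return True

    print(apply_update(bios_image, signature))         # True: authentic update
    print(apply_update(b"tampered image", signature))  # False: rejected by the RTU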

Additionally, outside of using things like UEFI and Secure Boot, there are security products, like Dell’s enterprise security suite and Intel Boot Guard, though many are geared toward the enterprise rather than the individual home user.

How do MFA Tokens work, anyway?

So you’re curious about the backend processes for tokens that generate one-time passwords (OTPs), such as the RSA token or an authenticator app?

First, it’s important to note that there’s a distinction between something like a Duo challenge-and-response and a code generator like Microsoft Authenticator and similar apps. An app like Duo, when we press a button on our phones to allow login to a website or other system, is not simply a one-time-password generator; it is actually acting as an “out-of-band” authentication device. The distinction between that and something like entering a randomly generated code from an RSA token is that the out-of-band device actually establishes a secure channel with an authentication server, using that secure channel to transmit an authentication challenge from the remote server to the device, like an OTP or a check mark to press.

Devices like RSA tokens are single-factor OTP devices, which differ from out-of-band devices in a few important ways. First, instead of an authentication challenge being presented to the device, a secret (like the code on an RSA token) is cryptographically generated separately from the verifying device, with the secret being independently generated by the token and compared by the verifier. There are also two values on a single-factor OTP token which remain on the device for its lifetime: a symmetric key, shared with the remote server that verifies authentication, and a nonce which is typically either a calculated value or based on a real-time clock.

In the case of an RSA token, the value is based on a clock, with time synced when the device is first deployed. There’s a level of tolerance between the token and its verifying server (usually 5 seconds or less) to allow for slight time drift over the life of the device, and the calculated value changes at a regular interval. Once the value is transmitted, the remote verifier compares it to its own calculated value and, if they match, the device is authenticated.
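
A minimal time-based OTP sketch in the spirit of RFC 6238 (what most authenticator apps use); RSA SecurID’s actual algorithm is proprietary, so treat this only as an illustration of “shared key + clock produces a short-lived code”:

    import hashlib
    import hmac
    import struct
    import time

    def totp(shared_key: bytes, at: float | None = None, step: int = 30, digits: int = 6) -> str:
        counter = int((time.time() if at is None else at) // step)
        msg = struct.pack(">Q", counter)                  # 8-byte big-endian time counter
        mac = hmac.new(shared_key, msg, hashlib.sha1).digest()
        offset = mac[-1] & 0x0F                           # dynamic truncation (RFC 4226)
        code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
        return str(code).zfill(digits)

    key = b"seed-provisioned-at-enrollment"  # symmetric key shared by token and verifier
    print(totp(key))                         # changes every 30 seconds
    print(totp(key) == totp(key))            # verifier recomputes and compares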

A device like a YubiKey differs a bit from an RSA token: instead of generating a code to be transmitted by user input, it’s directly connected to the user endpoint, where it uses an internal cryptographic mechanism to authenticate. It’s important to consider these devices not as standalone single-factor processes, but as part of a multi-factor authentication strategy, such as a combination of an RSA token code and username/password for authentication, or a website login/password and Duo.

Going even further with token authentication devices lands you in the area of multi-factor devices like smart cards, which require both possession of the card (something-you-have) and knowledge of a corresponding PIN code (something-you-know), plus the presence of a user account on the device being accessed. A familiar example of this is the DoD Common Access Card (CAC). To log on to a DoD system, you must have the smart card, the PIN for that card, and an account corresponding to the subject or subject alternative name of one of the certificates on the card.