feeds

by zer0x0ne — on


some of my favourite websites: null byte, the hacker news, hackaday, pen test partners, cso online, infosec writers, security week, xkcd



xkcd

Retrieved title: xkcd.com, 3 item(s)
Hammer Incident

I still think the Cold Stone Creamery partnership was a good idea, but I should have asked before doing the first market trials during the cryogenic mirror tests.

Spike Proteins

Ugh, it's stuck to my laptop. It must have bound to the ACER-2 receptor.

Checkbox

Check check check ... chhecck chhecck chhecck ... check check check

Google Online Security Blog

Retrieved title: Google Online Security Blog, 3 item(s)
Rust in the Android platform

Correctness of code in the Android platform is a top priority for the security, stability, and quality of each Android release. Memory safety bugs in C and C++ continue to be the most-difficult-to-address source of incorrectness. We invest a great deal of effort and resources into detecting, fixing, and mitigating this class of bugs, and these efforts are effective in preventing a large number of bugs from making it into Android releases. Yet in spite of these efforts, memory safety bugs continue to be a top contributor of stability issues, and consistently represent ~70% of Android’s high severity security vulnerabilities.

In addition to ongoing and upcoming efforts to improve detection of memory bugs, we are ramping up efforts to prevent them in the first place. Memory-safe languages are the most cost-effective means for preventing memory bugs. In addition to memory-safe languages like Kotlin and Java, we’re excited to announce that the Android Open Source Project (AOSP) now supports the Rust programming language for developing the OS itself.

Systems programming

Managed languages like Java and Kotlin are the best option for Android app development. These languages are designed for ease of use, portability, and safety. The Android Runtime (ART) manages memory on behalf of the developer. The Android OS uses Java extensively, effectively protecting large portions of the Android platform from memory bugs. Unfortunately, for the lower layers of the OS, Java and Kotlin are not an option.


Lower levels of the OS require systems programming languages like C, C++, and Rust. These languages are designed with control and predictability as goals. They provide access to low level system resources and hardware. They are light on resources and have more predictable performance characteristics.

For C and C++, the developer is responsible for managing memory lifetime. Unfortunately, it's easy to make mistakes when doing this, especially in complex and multithreaded codebases.


Rust provides memory safety guarantees by using a combination of compile-time checks to enforce object lifetime/ownership and runtime checks to ensure that memory accesses are valid. This safety is achieved while providing equivalent performance to C and C++.
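
As a loose, illustrative sketch of both flavours of check (not Android code): the commented-out block below is rejected at compile time by the borrow checker, while the out-of-range lookup is handled by a runtime bounds check rather than an invalid memory read.

```rust
fn main() {
    // Compile-time check: the block below is rejected by the borrow checker,
    // so it is left commented out. `r` would outlive the String it borrows.
    //
    // let r;
    // {
    //     let s = String::from("short-lived");
    //     r = &s;            // error[E0597]: `s` does not live long enough
    // }
    // println!("{r}");

    // Runtime check: slice indexing is bounds-checked, so an out-of-range
    // index becomes a deterministic panic instead of a silent out-of-bounds
    // read. `get` is the non-panicking alternative shown here.
    let values = vec![10, 20, 30];
    let idx = 3;
    match values.get(idx) {
        Some(v) => println!("values[{idx}] = {v}"),
        None => println!("index {idx} is out of bounds (len {})", values.len()),
        // `values[idx]` here would panic with "index out of bounds".
    }
}
```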

The limits of sandboxing

C and C++ languages don’t provide these same safety guarantees and require robust isolation. All Android processes are sandboxed and we follow the Rule of 2 to decide if functionality necessitates additional isolation and deprivileging. The Rule of 2 is simple: code may have at most two of three properties, namely processing untrustworthy inputs, being written in an unsafe language (C/C++), and running at high privilege outside a sandbox.

For Android, this means that if code is written in C/C++ and parses untrustworthy input, it should be contained within a tightly constrained and unprivileged sandbox. While adherence to the Rule of 2 has been effective in reducing the severity and reachability of security vulnerabilities, it does come with limitations. Sandboxing is expensive: the new processes it requires consume additional overhead and introduce latency due to IPC and additional memory usage. Sandboxing doesn’t eliminate vulnerabilities from the code and its efficacy is reduced by high bug density, allowing attackers to chain multiple vulnerabilities together.

Memory-safe languages like Rust help us overcome these limitations in two ways:

  1. Lowers the density of bugs within our code, which increases the effectiveness of our current sandboxing.
  2. Reduces our sandboxing needs, allowing introduction of new features that are both safer and lighter on resources.

But what about all that existing C++?

Of course, introducing a new programming language does nothing to address bugs in our existing C/C++ code. Even if we redirected the efforts of every software engineer on the Android team, rewriting tens of millions of lines of code is simply not feasible.

Our analysis of the age of memory safety bugs in Android (measured from when they were first introduced) demonstrates why our memory-safe language efforts are best focused on new development and not on rewriting mature C/C++ code. Most of our memory bugs occur in new or recently modified code, with about 50% being less than a year old.

The comparative rarity of older memory bugs may come as a surprise to some, but we’ve found that old code is not where we most urgently need improvement. Software bugs are found and fixed over time, so we would expect the number of bugs in code that is being maintained but not actively developed to go down over time. Just as reducing the number and density of bugs improves the effectiveness of sandboxing, it also improves the effectiveness of bug detection.

Limitations of detection

Bug detection via robust testing, sanitization, and fuzzing is crucial for improving the quality and correctness of all software, including software written in Rust. A key limitation for the most effective memory safety detection techniques is that the erroneous state must actually be triggered in instrumented code in order to be detected. Even in code bases with excellent test/fuzz coverage, this results in a lot of bugs going undetected.
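
As a contrived example of that limitation (hypothetical code, not from any real project), the parser below contains an off-by-one that only manifests for one specific malformed input; a test suite or fuzzer that never generates that exact input will not observe the erroneous state, no matter how much instrumentation is enabled.

```rust
// Hypothetical length-prefixed record parser (not from any real project).
// The length field is the single byte at offset 0; the body follows it.
fn read_record(buf: &[u8]) -> Option<&[u8]> {
    let len = *buf.first()? as usize;
    // BUG: the check should be `1 + len > buf.len()`. The off-by-one is only
    // observable when the declared length exactly fills the buffer
    // (len == buf.len()); typical well-formed test inputs never hit it.
    if len > buf.len() {
        return None;
    }
    Some(&buf[1..=len]) // out-of-bounds index (panic) when len == buf.len()
}

fn main() {
    // Well-formed input: parses fine, nothing for instrumentation to see.
    assert_eq!(read_record(&[3, b'a', b'b', b'c']), Some(&b"abc"[..]));
    // Only this specific malformed input reaches the erroneous state.
    // Uncomment to observe the panic that a sanitizer or fuzzer would flag:
    // read_record(&[4, b'a', b'b', b'c']);
}
```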

Another limitation is that bug detection is scaling faster than bug fixing. In some projects, bugs that are detected are not always getting fixed. Bug fixing is a long and costly process: the bug must be triaged and accurately diagnosed, a correct fix must be developed and reviewed, and the update must be shipped to and then applied by users.

Each of these steps is costly, and missing any one of them can result in the bug going unpatched for some or all users. For complex C/C++ code bases, often there are only a handful of people capable of developing and reviewing the fix, and even with a high amount of effort spent on fixing bugs, sometimes the fixes are incorrect.

Bug detection is most effective when bugs are relatively rare and dangerous bugs can be given the urgency and priority that they merit. Our ability to reap the benefits of improvements in bug detection requires that we prioritize preventing the introduction of new bugs.

Prioritizing prevention

Rust modernizes a range of other language aspects, which results in improved correctness of code:

  • Memory safety - enforces memory safety through a combination of compiler and run-time checks.
  • Data concurrency - prevents data races. The ease with which this allows users to write efficient, thread-safe code has given rise to Rust’s Fearless Concurrency slogan.
  • More expressive type system - helps prevent logical programming bugs (e.g. newtype wrappers, enum variants with contents).
  • References and variables are immutable by default - assist the developer in following the security principle of least privilege, marking a reference or variable mutable only when they actually intend it to be so. While C++ has const, it tends to be used infrequently and inconsistently. In comparison, the Rust compiler assists in avoiding stray mutability annotations by offering warnings for mutable values which are never mutated.
  • Better error handling in standard libraries - wrap potentially failing calls in Result, which causes the compiler to require that users check for failures even for functions which do not return a needed value. This protects against bugs like the Rage Against the Cage vulnerability which resulted from an unhandled error. By making it easy to propagate errors via the ? operator and optimizing Result for low overhead, Rust encourages users to write their fallible functions in the same style and receive the same protection.
  • Initialization - requires that all variables be initialized before use. Uninitialized memory vulnerabilities have historically been the root cause of 3-5% of security vulnerabilities on Android. In Android 11, we started auto initializing memory in C/C++ to reduce this problem. However, initializing to zero is not always safe, particularly for things like return values, where this could become a new source of faulty error handling. Rust requires every variable be initialized to a legal member of its type before use, avoiding the issue of unintentionally initializing to an unsafe value. Similar to Clang for C/C++, the Rust compiler is aware of the initialization requirement, and avoids any potential performance overhead of double initialization.
  • Safer integer handling - Overflow sanitization is on for Rust debug builds by default, encouraging programmers to specify a wrapping_add if they truly intend a calculation to overflow or saturating_add if they don’t. We intend to enable overflow sanitization for all builds in Android. Further, all integer type conversions are explicit casts: developers cannot accidentally cast during a function call, when assigning to a variable, or when attempting to do arithmetic with other types. Several of these points are illustrated in the sketch after this list.
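
As a rough illustration of a few of these properties (immutability by default, Result-based error handling with the ? operator, and explicit overflow handling), here is a minimal standalone sketch; it is not taken from AOSP, and the names and values are made up:

```rust
use std::num::ParseIntError;

// Parsing can fail, so the signature says so: callers get a Result and the
// compiler insists they deal with the error case.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    let port: u16 = s.trim().parse()?; // `?` propagates the error to the caller
    Ok(port)
}

fn main() {
    // Bindings are immutable unless explicitly marked `mut`.
    let mut total: u16 = 0;

    for raw in ["80", "443", "not-a-port"] {
        match parse_port(raw) {
            Ok(port) => {
                // Overflow is explicit: `checked_add` returns None on overflow,
                // while `wrapping_add`/`saturating_add` document intent.
                total = total.checked_add(port).unwrap_or(u16::MAX);
                println!("parsed {raw:?} -> {port}, running total {total}");
            }
            Err(e) => println!("rejected {raw:?}: {e}"),
        }
    }
}
```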

Where we go from here

Adding a new language to the Android platform is a large undertaking. There are toolchains and dependencies that need to be maintained, test infrastructure and tooling that must be updated, and developers that need to be trained. For the past 18 months we have been adding Rust support to the Android Open Source Project, and we have a few early adopter projects that we will be sharing in the coming months. Scaling this to more of the OS is a multi-year project. Stay tuned, we will be posting more updates on this blog.

Java is a registered trademark of Oracle and/or its affiliates.

Thanks Matthew Maurer, Bram Bonne, and Lars Bergstrom for contributions to this post. Special thanks to our colleagues, Adrian Taylor for his insight into the age of memory vulnerabilities, and to Chris Palmer for his work on “The Rule of 2” and “The limits of Sandboxing”.

Announcing the Android Ready SE Alliance

When the Pixel 3 launched in 2018, it had a new tamper-resistant hardware enclave called Titan M. In addition to being a root-of-trust for Pixel software and firmware, it also enabled tamper-resistant key storage for Android Apps using StrongBox. StrongBox is an implementation of the Keymaster HAL that resides in a hardware security module. It is an important security enhancement for Android devices and paved the way for us to consider features that were previously not possible.

StrongBox and tamper-resistant hardware are becoming important requirements for emerging user features, including:

  • Digital keys (car, home, office)
  • Mobile Driver’s License (mDL), National ID, ePassports
  • eMoney solutions (for example, Wallet)

All these features need to run on tamper-resistant hardware to protect the integrity of the application executables and a user’s data, keys, wallet, and more. Most modern phones now include discrete tamper-resistant hardware called a Secure Element (SE). We believe this SE offers the best path for introducing these new consumer use cases in Android.

In order to accelerate adoption of these new Android use cases, we are announcing the formation of the Android Ready SE Alliance. SE vendors are joining hands with Google to create a set of open-source, validated, and ready-to-use SE Applets. Today, we are launching the General Availability (GA) version of StrongBox for SE. This applet is qualified and ready for use by our OEM partners. It is currently available from Giesecke+Devrient, Kigen, NXP, STMicroelectronics, and Thales.

It is important to note that these features are not just for phones and tablets. StrongBox is also applicable to WearOS, Android Auto Embedded, and Android TV.

Using Android Ready SE in a device requires the OEM to:

  1. Pick the appropriate, validated hardware part from their SE vendor
  2. Enable SE to be initialized from the bootloader and provision the root-of-trust (RoT) parameters through the SPI interface or cryptographic binding
  3. Work with Google to provision Attestation Keys/Certificates in the SE factory
  4. Use the GA version of the StrongBox for the SE applet, adapted to your SE
  5. Integrate HAL code
  6. Enable an SE upgrade mechanism
  7. Run CTS/VTS tests for StrongBox to verify that the integration is done correctly

We are working with our ecosystem to prioritize and deliver the following Applets in conjunction with corresponding Android feature releases:

  • Mobile driver’s license and Identity Credentials
  • Digital car keys

We already have several Android OEMs adopting Android Ready SE for their devices. We look forward to working with our OEM partners to bring these next-generation features to our users.

Please visit our Android Security and Privacy developer site for more info.

Announcing the winners of the 2020 GCP VRP Prize

We first announced the GCP VRP Prize in 2019 to encourage security researchers to focus on the security of Google Cloud Platform (GCP), in turn helping us make GCP more secure for our users, customers, and the internet at large. In the first iteration of the prize, we awarded $100,000 to the winning write-up about a security vulnerability in GCP. We also announced that we would reward the top 6 submissions in 2020 and increased the total prize money to $313,337.

2020 turned out to be an amazing year for the Google Vulnerability Reward Program. We received many high-quality vulnerability reports from our talented and prolific vulnerability researchers.


Vulnerability reports received year-over-year



This trend was reflected in the submissions we received for the GCP VRP Prize. After careful evaluation of the many innovative and high-impact vulnerability write-ups we received this year, we are excited to announce the winners of the 2020 GCP VRP Prize:
  • First Prize, $133,337: Ezequiel Pereira for the report and write-up RCE in Google Cloud Deployment Manager. The bug discovered by Ezequiel allowed him to make requests to internal Google services, authenticated as a privileged service account. Here's a video that gives more details about the bug and the discovery process.

  • Second Prize, $73,331: David Nechuta for the report and write-up 31k$ SSRF in Google Cloud Monitoring led to metadata exposure. David found a Server-side Request Forgery (SSRF) bug in Google Cloud Monitoring's uptime check feature. The bug could have been used to leak the authentication token of the service account used for these checks.
  • Third Prize, $73,331: Dylan Ayrey and Allison Donovan for the report and write-up Fixing a Google Vulnerability. They pointed out issues in the default permissions associated with some of the service accounts used by GCP services.
  • Fourth Prize, $31,337: Bastien Chatelard for the report and write-up Escaping GKE gVisor sandboxing using metadata. Bastien discovered a bug in the GKE gVisor sandbox's network policy implementation due to which the Google Compute Engine metadata API was accessible. 
  • Fifth Prize, $1,001: Brad Geesaman for the report and write-up CVE-2020-15157 "ContainerDrip" Write-up. The bug could allow an attacker to trick containerd into leaking instance metadata by supplying a malicious container image manifest.
  • Sixth Prize, $1,000: Chris Moberly for the report and write-up Privilege Escalation in Google Cloud Platform's OS Login. The report demonstrates how an attacker can use DHCP poisoning to escalate their privileges on a Google Compute Engine VM.
Congratulations to all the winners! If we have piqued your interest and you would like to enter the competition for a GCP VRP Prize in 2021, here’s a reminder on the requirements.
  • Find a vulnerability in a GCP product (check out Google Cloud Free Program to get started)
  • Report it to the VRP (you might get rewarded for it on top of the GCP VRP Prize!)
  • Create a public write-up
  • Submit it here
Make sure to submit your VRP reports and write-ups before December 31, 2021 at 11:59 GMT. Good luck! You can learn more about the prize for this year here. We can't wait to see what our talented vulnerability researchers come up with this year!

hackaday

Retrieved title: Hackaday, 3 item(s)
Laser-Cut Solder Masks from Business Cards

There are plenty of ways to make printed circuit boards at home but for some features it’s still best to go to a board shop. Those features continue to decrease in number, but not a lot of people can build things such as a four-layer board at home. Adding a solder mask might be one of those features for some, but if you happen to have a laser cutter and a few business cards sitting around then this process is within reach of the home builder too.

[Jeremy Cook] is lucky enough to have a laser cutter around, and he had an idea to use it to help improve his surface mount soldering process. By cutting the solder mask layer into a business card with the laser cutter, it can be held on top of a PCB and then used as a stencil to add the solder paste more easily than could otherwise be done. It dramatically decreases the amount of time spent on this part of the process, especially when multiple boards are involved since the stencil can be used multiple times.

While a laser cutter certainly isn’t a strict requirement, it does help over something like an X-Acto knife. [Jeremy] also notes that this process is sometimes done with transparency film or even Kapton, which we have seen a few times before as well.

Uplink System For High-Altitude Balloons

Most uses of high-altitude balloons are fairly simple: send the balloon up and have it beam down measurements and images. While this is indeed straightforward, it is also very limiting. This is why [Dave Akerman] has been working on adding an uplink to the HAB balloons he regularly flies, building on the work [Dave] did back in 2015 adding LoRa transceiver RF communication.

Since LoRa transceivers are by definition capable of bidirectional communication, this was very useful for adding simple but essential features such as retransmission of data in case e.g. part of some image or telemetry data is missing. Other interesting things one can do with bidirectional transmission include controlling individual balloons, and having them transmit or relay information between balloons.

A tricky thing which [Dave] describes in the blog post is making sure that both ends of the connection are actually listening using timing settings. The use of encryption is also strongly recommended, unless you want to risk someone hijacking your balloons. This has now all been implemented in the HAB Explora app for Android, as well as the application for Windows.

Header image: Antonino Vara, CC BY 4.0.

Hamster Goes on Virtual Journey

Hamsters are great pets, especially for those with limited space or other resources. They are fun playful animals that are fairly easy to keep, and are entertaining to boot. [Kim]’s hamster, [Mr. Fluffbutt], certainly fits this mold as well but [Kim] wanted something a little beyond the confines of the habitat and exercise wheel and decided to send him on a virtual journey every time he goes for a run.

The virtual hamster journey is built on an ESP32 microcontroller which monitors the revolutions of the hamster wheel via a hall effect sensor and magnet. It then extrapolates the distance the hamster has run and sends the data to a Raspberry Pi which hosts an MQTT and Node.js server. From there, it maps out an equivalent route according to a predefined GPX route and updates that information live. The hamster follows the route, in effect, every time it runs on the wheel. [Mr Fluffbutt] has made it from the Netherlands to southeastern Germany so far, well on his way to his ancestral home of Syria.
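
As a rough sketch of the bookkeeping described above (the wheel diameter and the revolution count are invented values, and the real project runs on an ESP32 publishing to an MQTT/Node.js backend rather than as a standalone program):

```rust
use std::f64::consts::PI;

/// Hypothetical wheel diameter in metres; the real value depends on the wheel.
const WHEEL_DIAMETER_M: f64 = 0.17;

/// Convert a count of wheel revolutions (from the hall-effect sensor)
/// into distance travelled along the virtual route, in kilometres.
fn revolutions_to_km(revolutions: u64) -> f64 {
    let circumference_m = PI * WHEEL_DIAMETER_M;
    revolutions as f64 * circumference_m / 1000.0
}

fn main() {
    // Each magnet pass on the hall-effect sensor increments this counter.
    let tonight_revolutions: u64 = 9_350;
    let km = revolutions_to_km(tonight_revolutions);
    // The real project publishes this figure over MQTT to a Raspberry Pi,
    // which advances the hamster's position along a predefined GPX route.
    println!("Mr. Fluffbutt ran {km:.2} km tonight");
}
```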

This project is a great way to add a sort of augmented reality to a pet hamster, in a similar way that we’ve seen self-driving fish tanks. Adding a Google Streetview monitor to the hamster habitat would be an interesting addition as well, but for now we’re satisfied seeing the incredible journey that [Mr Fluffbutt] has been on so far.

Security Affairs

Retrieved title: Security Affairs, 3 item(s)
A threat actor exploited 11 zero-day flaws in 2020 campaigns

A hacking group has employed at least 11 zero-day flaws as part of an operation that took place in 2020 and targeted Android, iOS, and Windows users.

Google’s Project Zero security team published a report about the activity of a mysterious hacking group that operated over the course of 2020 and exploited at least 11 zero-day vulnerabilities in its attacks on Android, iOS, and Windows users.

zero-day attacks

Google researchers observed two separate waves of attacks that took place in February and October 2020, respectively. Threat actors set up malicious sites in a series of watering hole attacks that were redirecting visitors to exploit servers hosting exploit chains for Android, Windows, and iOS devices.

“In October 2020, Google Project Zero discovered seven 0-day exploits being actively used in-the-wild. These exploits were delivered via “watering hole” attacks in a handful of websites pointing to two exploit servers that hosted exploit chains for Android, Windows, and iOS devices.” wrote the popular Project Zero researcher Maddie Stone. “These attacks appear to be the next iteration of the campaign discovered in February 2020 and documented in this blog post series.”

Since February 2020, the same hacking group has set up at least a couple dozen websites for its attacks; experts noticed that the threat actors relied on both zero-day vulnerabilities and known flaws.

Nonetheless, the threat actor behind the attacks also showed the ability to replace zero-days on the fly once one was detected and patched by software vendors.

Below are the exploits that were delivered, broken down by exploit server, device, and browser, in the last wave of attacks:

  • Exploit server 1, iOS, Safari: renderer RCE via stack R/W using Type 1 fonts (CVE-2020-27930); sandbox escape not needed; local privilege escalation via an info leak in mach message trailers (CVE-2020-27950) and a type confusion with turnstiles (CVE-2020-27932).
  • Exploit server 1, Windows, Chrome: renderer RCE via a Freetype heap buffer overflow (CVE-2020-15999); sandbox escape not needed; local privilege escalation via a cng.sys heap buffer overflow (CVE-2020-17087).
  • Exploit server 1, Android, Chrome (only delivered after server #2 went down and CVE-2020-15999 was patched): renderer RCE via a V8 type confusion in TurboFan (CVE-2020-16009); sandbox escape unknown; local privilege escalation unknown.
  • Exploit server 2, Android, Chrome: renderer RCE via a Freetype heap buffer overflow (CVE-2020-15999); sandbox escape via a Chrome for Android heap buffer overflow (CVE-2020-16010); local privilege escalation unknown.
  • Exploit server 2, Android, Samsung Browser: renderer RCE via a Freetype heap buffer overflow (CVE-2020-15999); sandbox escape via a Chromium n-day; local privilege escalation unknown.

The Project Zero report also lists the specific zero-day flaws exploited in the February 2020 campaign and those exploited in the October 2020 attacks; the latter are the seven CVEs enumerated in the exploit chains above.

At the time of this writing, Google has yet to attribute these campaigns to any specific threat actor and it is still unclear if the attacks have been conducted by a nation-state actor.

“The vulnerabilities cover a fairly broad spectrum of issues – from a modern JIT vulnerability to a large cache of font bugs. Overall each of the exploits themselves showed an expert understanding of exploit development and the vulnerability being exploited. In the case of the Chrome Freetype 0-day, the exploitation method was novel to Project Zero.” concludes the post. “Project Zero closed out 2020 with lots of long days analyzing lots of 0-day exploit chains and seven 0-day exploits. When combined with their earlier 2020 operation, the actor used at least 11 0-days in less than a year.”



Pierluigi Paganini

(SecurityAffairs – hacking, zero-day)


REvil ransomware gang hacked Acer and is demanding a $50 million ransom

Taiwanese multinational hardware and electronics corporation Acer was the victim of a REvil ransomware attack; the gang demanded a $50,000,000 ransom.

Taiwanese computer giant Acer was the victim of the REvil ransomware attack; the gang is demanding the payment of a $50,000,000 ransom, the largest demand to date.

Acer is the world’s 6th-largest PC vendor by unit sales as of January 2021; it has more than 7,000 employees (2019) and declared NT$234.29 billion in revenue in 2019.

The ransomware gang claimed to have stolen data from the vendor’s systems before encrypting them, then published on their data leak site some images of allegedly stolen documents (e.g., financial spreadsheets, bank documents, and communications) as proof of the hack.

Acer is currently investigating the security breach.

“Acer routinely monitors its IT systems, and most cyberattacks are well defensed. Companies like us are constantly under attack, and we have reported recent abnormal situations observed to the relevant law enforcement and data protection authorities in multiple countries.” reads a statement issued by the company. “We have been continuously enhancing our cybersecurity infrastructure to protect business continuity and our information integrity. We urge all companies and organizations to adhere to cyber security disciplines and best practices, and be vigilant to any network activity abnormalities.”

While investigating the security breach, researchers at LeMagIT (a TechTarget sister publication) discovered on Friday a REvil ransomware sample on the malware analysis site Hatching Triage that was employed in the attack on Acer; it contains a link to a REvil ransom demand for $50 million worth of Monero (213,151 XMR as of publishing).

“We have since found a sample of the REvil / Sodinokibi ransomware that leads to an engaged discussion between victim and attacker. The latter start by providing a link that leads to their blog page… devoted to Acer. The conversation started on March 14.” reported LeMagIT. “The cybercriminals have offered a 20% discount on the requested amount, provided the settlement reaches them by March 17. Currently, they are asking for $50 million. Their interlocutor proposed $10 million. The attackers are giving Acer until March 28 to meet their demands or find an arrangement. After this deadline, they will demand $100 million.”

REvil ransomware operators offered a 20% discount if the payment was completed by Wednesday of this week.

Source: LeMagIT

According to BleepingComputer, the popular malware researcher Vitali Kremez shared evidence with the outlet that one of the affiliates of the REvil RaaS recently targeted a Microsoft Exchange server belonging to Acer.



Pierluigi Paganini

(SecurityAffairs – hacking, ransomware)


Russian National pleads guilty to conspiracy to plant malware on Tesla systems

The Russian national who attempted to convince a Tesla employee to plant malware on Tesla systems has pleaded guilty.

The U.S. Justice Department announced on Thursday that the Russian national Egor Igorevich Kriuchkov (27), who attempted to convince a Tesla employee to install malware on the company’s computers, has pleaded guilty.

“A Russian national pleaded guilty in federal court today for conspiring to travel to the United States to recruit an employee of a Nevada company into a scheme to introduce malicious software into the company’s computer network.” read a press release published by the DoJ.

In September, Kriuchkov was indicted in the United States for conspiring to recruit a Tesla employee to install malware onto the company’s network.

The man was arrested on August 22 and appeared in court on August 24. Kriuchkov had offered $1 million to the targeted employee of the US company.

Kriuchkov conspired with other criminals to recruit the employee of an unnamed company in Nevada. At the end of August, Elon Musk confirmed that Russian hackers attempted to recruit an employee to install malware into the network of electric car maker Tesla.

Teslarati confirmed that the employee contacted by the crooks is a Russian-speaking, non-US citizen working at Tesla-owned lithium-ion battery and electric vehicle subassembly factory Giga Nevada.

The Russian man and his co-conspirators planned to exfiltrate data from the company’s network and then blackmail the organization, threatening to leak the stolen data unless the company paid a ransom.

A few days after meeting the employee, Kriuchkov laid out his plan, offering him between $500,000 and $1,000,000 for the dirty job. The malware would have given Kriuchkov and his co-conspirators access to the company’s systems; the malicious code was specifically designed to steal information from Tesla.

The employee decided to warn Tesla and the company reported the attempt to the FBI. The employee had more meetings with Kriuchkov that were surveilled by the FBI. On August 22, the FBI arrested Kriuchkov.

“The swift response of the company and the FBI prevented a major exfiltration of the victim company’s data and stopped the extortion scheme at its inception,” said Acting Assistant Attorney General Nicholas L. McQuaid of the Justice Department’s Criminal Division. “This case highlights the importance of companies coming forward to law enforcement, and the positive results when they do so.”

“This case highlights our office’s commitment to protecting trade secrets and other confidential information belonging to U.S. businesses — which is becoming even more important each day as Nevada evolves into a center for technological innovation,” said Acting U.S. Attorney Christopher Chiou for the District of Nevada. “Along with our law enforcement partners, we will continue to prioritize stopping cybercriminals from harming American companies and consumers.”

“This is an excellent example of community outreach resulting in strong partnerships, which led to proactive law enforcement action before any damage could occur,” said Special Agent in Charge Aaron C. Rouse of the FBI’s Las Vegas Field Office.

Kriuchkov will be sentenced on May 10.



Pierluigi Paganini

(SecurityAffairs – hacking, Tesla)


SecurityWeek

Retrieved title: SecurityWeek RSS Feed, 3 item(s)
Microsoft Open-Sources 'CyberBattleSim' Enterprise Environment Simulator

Microsoft this week announced the open source availability of Python code for “CyberBattleSim,” a research toolkit that supports simulating complex computer systems.


CISA Releases Tool to Detect Microsoft 365 Compromise

The U.S. Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) has released a new tool to help with the detection of potential compromise within Microsoft Azure and Microsoft 365 environments.


Security Automation Firm Tines Raises $26 Million at $300 Million Valuation

Tines, an Ireland-based company that provides no-code automation solutions for security and operations teams, on Thursday announced that it has raised $26 million in a Series B funding round, at a valuation of $300 million.
