feeds

by zer0x0ne


some of my favourite websites: PortSwigger, Almost Secure, Dark Reading, Packet Storm, xkcd



xkcd

Retrieved title: xkcd.com, 3 item(s)
Call My Cell

'Hey, can you call my cell?' '...I'm trying, but it says this number is blocked?' 'Ok, thanks, just checking.'

Goodhart's Law

[later] I'm pleased to report we're now identifying and replacing hundreds of outdated metrics per hour.

Orbital Argument

"Some people say light is waves, and some say it's particles, so I bet light is some in-between thing that's both wave and particle depending on how you look at it. Am I right?" "YES, BUT YOU SHOULDN'T BE!"

PortSwigger Research

Retrieved title: PortSwigger Research, 6 item(s)
Top 10 web hacking techniques of 2023

Welcome to the Top 10 Web Hacking Techniques of 2023, the 17th edition of our annual community-powered effort to identify the most innovative must-read web security research published in the last year…

Hiding payloads in Java source code strings

In this post we'll show you how Java handles unicode escapes in source code strings in a way you might find surprising - and how you can abuse them to conceal payloads. We recently released a powerful…

Top 10 web hacking techniques of 2023 - nominations open

Update: The results are in! Check out the final top ten here or scroll down to view all nominations. Over the last year, numerous security researchers have shared their discoveries with the community…

Finding that one weird endpoint, with Bambdas

Security research involves a lot of failure. It's a perpetual balancing act between taking small steps with a predictable but boring outcome, and trying out wild concepts that are so crazy they might…

Blind CSS Exfiltration: exfiltrate unknown web pages

This is a gif of the exfiltration process (we've increased the speed so you're not waiting around for 1 minute). Read on to discover how this works… CSS Cafe presentation: I presented this technique…

The single-packet attack: making remote race-conditions 'local'

The single-packet attack is a new technique for triggering web race conditions. It works by completing multiple HTTP/2 requests with a single TCP packet, which effectively eliminates network jitter…

Dark Reading

Retrieved title: darkreading, 6 item(s)
Microsoft Zero Day Used by Lazarus in Rootkit Attack

North Korean state actors Lazarus Group used a Windows AppLocker zero day, along with a new and improved rootkit, in a recent cyberattack, researchers report.

FBI, CISA Release IoCs for Phobos Ransomware

Threat actors using the malware have infected systems within government, healthcare, and other critical infrastructure organizations since at least 2019.

Chinese APT Developing Exploits to Defeat Already Patched Ivanti Users

More bad news for Ivanti customers: soon, even if you've patched, you still might not be safe from relentless attacks from high-level Chinese threat actors.

Biden Administration Unveils Data Privacy Executive Order

The presidential move orders a variety of different departments and organizations to regulate personal data better and provide clear, high standards to prevent foreign access.

Troutman Pepper Forms Incidents and Investigations Team

Tenable Introduces Visibility Across IT, OT, and IoT Domains

Almost Secure

Retrieved title: Almost Secure, 3 item(s)
Implementing a “Share on Mastodon” button for a blog

I decided that I would make it easier for people to share my articles on social media, most importantly on Mastodon. However, my Hugo theme didn’t support showing a “Share on Mastodon” button yet. It wasn’t entirely trivial to add support either: unlike with centralized solutions like Facebook where a simple link is sufficient, here one would need to choose their home instance first.

As far as existing solutions go, the only reasonably sophisticated approach appears to be Share₂Fedi. It works nicely, though privacy-wise one could do better. So I ended up implementing my own solution while also generalizing it to support a variety of different Fediverse applications in addition to Mastodon.

Screenshot of a web page titled “Share on Fediverse”

Why not Share₂Fedi?

If all you want is quickly adding a “Share on Fediverse” button, Share₂Fedi is really the simplest solution. It only requires a link, just like your typical share button. You link to the Share₂Fedi website, passing the text to be shared as a query parameter. The user will be shown an interstitial page there, allowing them to select a Fediverse instance. After submitting the form they will be redirected to the Fediverse instance in question for the final confirmation.

Unfortunately, the privacy aspect of this solution isn’t quite optimal. Rather than having all the processing happen on the client side, Share₂Fedi relies on server-side processing. This means that your data is being stored in the server logs at the very least. This data being the address and title of the article being shared, it isn’t terribly sensitive. Yet why send any data to a third party when you could send none?

I was told that Share₂Fedi was implemented in this way in order to work even with client-side JavaScript disabled. Which is a fair point but not terribly convincing seeing how your typical social media website won’t work without JavaScript.

But it is possible to self-host Share₂Fedi of course. It is merely something I’d rather avoid. See, this blog is built with the Hugo static site generator, and there is very little server-side functionality here. I’d rather keep it this way.

Share on Mastodon or on Fediverse?

Originally, I meant to add buttons for individual applications. First a “Share on Mastodon” button. Then maybe “Share on Lemmy.” Also “Share on Kbin.” Oh, maybe also Friendica. Wait, Misskey exposes the same sharing endpoint as Mastodon?

I realized that this approach doesn’t really scale, with the Fediverse consisting of many different applications. Supporting the applications beyond Mastodon is trivial, but adding individual buttons for each application would create a mess.

So maybe have a “Share on Fediverse” button instead of “Share on Mastodon”? Users have to select an instance anyway, and the right course of action can be determined based on the type of this instance. There is a Fediverse logo as well.

Only concern: few people know the Fediverse logo so far, way fewer than those who recognize the Mastodon logo. So I decided to show both “Share on Mastodon” and “Share on Fediverse” buttons. When clicked, both lead to the exact same page.

A black-and-white version of the Mastodon logo next to a similarly black-and-white version of the Fediverse logo

And that page would choose the right endpoint based on the selected instance. Here are the endpoints for individual Fediverse applications (mostly taken from Share₂Fedi, with some additions by me):

{
  "calckey": "share?text={text}",
  "diaspora": "bookmarklet?title={title}&notes={description}&url={url}",
  "fedibird": "share?text={text}",
  "firefish": "share?text={text}",
  "foundkey": "share?text={text}",
  "friendica": "compose?title={title}&body={description}%0A{url}",
  "glitchcafe": "share?text={text}",
  "gnusocial": "notice/new?status_textarea={text}",
  "hometown": "share?text={text}",
  "hubzilla": "rpost?title={title}&body={description}%0A{url}",
  "kbin": "new/link?url={url}",
  "mastodon": "share?text={text}",
  "meisskey": "share?text={text}",
  "microdotblog": "post?text=[{title}]({url})%0A%0A{description}",
  "misskey": "share?text={text}"
}

Note: From what I can tell, Lemmy and Pleroma don’t have an endpoint which could be used.
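
To illustrate how these templates are used later, here is a minimal sketch (my own helper, not the theme’s code) that expands such a template into a full share URL:

// Minimal sketch, not the theme's actual code: expand one of the endpoint
// templates above into a full share URL for a given instance.
function buildShareUrl(instance, template, params) {
  // params: plain strings, e.g. {text, title, description, url}
  const path = template.replace(/{(\w+)}/g, (match, name) =>
    name in params ? encodeURIComponent(params[name]) : match
  );
  return "https://" + instance + "/" + path;
}

buildShareUrl("infosec.exchange", "share?text={text}", {
  text: "Example article https://example.com/article"
});
// -> "https://infosec.exchange/share?text=Example%20article%20https%3A%2F%2Fexample.com%2Farticle"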

What to share?

Share₂Fedi assumes that all Fediverse applications accept unstructured text. So that’s the default for my solution as well: a text consisting of the article’s title, description and address.

When it comes to the Fediverse, however, one size does not fit all. Some applications like Diaspora expect more structured input. Micro.blog on the other hand expects Markdown input; special markup is required for a link to be displayed. And Kbin has the most exotic solution: it accepts only the article’s address; all other article metadata is then retrieved automatically.

So I resorted to displaying all the individual fields on the intermediate sharing page:

A form titled “Share on Fediverse” with pre-filled fields Post title, Description and Link. The “Fediverse instance” field is focused and empty.

These fields are pre-filled and cannot be edited. After all, what good would editing these fields do if some of them might be thrown away or mashed together in the next step? So editing the text is delegated to the Fediverse instance, and this page is only about choosing an instance.

Trouble determining the Fediverse application

So, in order to choose the right endpoint, one has to know what Fediverse application powers the selected instance. Luckily, that’s easy. First, one downloads the .well-known/nodeinfo file of the instance. Here is the one for infosec.exchange:

{
  "links": [
    {
      "rel": "http://nodeinfo.diaspora.software/ns/schema/2.0",
      "href": "https://infosec.exchange/nodeinfo/2.0"
    }
  ]
}

We need the link marked with the rel value http://nodeinfo.diaspora.software/ns/schema/2.0. Next we download that document and get:

{
  "version": "2.0",
  "software": {
    "name": "mastodon",
    "version": "4.3.0-alpha.0+glitch"
  },
  "protocols": ["activitypub"],
  "services": {
    "outbound": [],
    "inbound": []
  },
  "usage": {
    "users": {
      "total": 60802,
      "activeMonth": 17803,
      "activeHalfyear": 33565
    },
    "localPosts": 2081420
  },
  "openRegistrations": true,
  "metadata": {}
}

There it is: the software name is mastodon, so we know to use the share?text= endpoint.
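
In code, the two-step lookup could look roughly like this (a sketch of my own, error handling omitted):

// Sketch of the two-step node info lookup, error handling omitted.
async function getSoftwareName(instance) {
  const wellKnown = await fetch("https://" + instance + "/.well-known/nodeinfo")
    .then(response => response.json());
  // Find the link marked with the nodeinfo 2.0 schema.
  const link = wellKnown.links.find(
    entry => entry.rel === "http://nodeinfo.diaspora.software/ns/schema/2.0"
  );
  const nodeInfo = await fetch(link.href).then(response => response.json());
  return nodeInfo.software.name; // e.g. "mastodon"
}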

The catch is: when I tried implementing this check, most Fediverse applications didn’t have consistent CORS headers on their node info responses. And this means that third-party websites (like my blog) would be allowed to request these endpoints but wouldn’t get access to the response. So no software name for me.

Now obviously it shouldn’t be like this; allowing third-party websites to access the node info is very much desirable. And most Fediverse applications being open source software, I fixed this issue for Mastodon, Diaspora, Friendica and Kbin. GNU Social, Misskey, Lemmy, Pleroma, Pixelfed and Peertube already had it working correctly.

But the issue remains: it will take quite some time until we can expect node info downloads to work reliably. One could use a CORS proxy of course, but it would run contrary to the goal of not relying on third parties. Or use a self-hosted CORS proxy, but that’s again adding server-side functionality.

I went with another solution. The Fediverse Observer website offers an API that allows querying its list of Fediverse instances. For example, the following query downloads information on all instances it knows.

{nodes(softwarename: ""){
  softwarename
  domain
  score
  active_users_monthly
}}

Unfortunately, it doesn’t have meaningful filtering capabilities. So I have to filter it after downloading: I only keep the servers with an uptime score above 90 and at least 10 active users in the past month. This results in a list of roughly 2200 instances, meaning 160 KiB uncompressed – a reasonable size IMHO, especially compared to the 5.5 MiB of the unfiltered list.
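
A sketch of this query and the subsequent filtering (the API endpoint address here is an assumption on my part, check the Fediverse Observer documentation):

// Sketch only; the endpoint address is an assumption, not verified.
const query = `{nodes(softwarename: ""){
  softwarename
  domain
  score
  active_users_monthly
}}`;

async function getInstances() {
  const response = await fetch("https://api.fediverse.observer/", {
    method: "POST",
    headers: {"Content-Type": "application/json"},
    body: JSON.stringify({query})
  });
  const {data} = await response.json();
  // Keep servers with an uptime score above 90 and at least
  // 10 monthly active users, as described above.
  return data.nodes.filter(
    node => node.score > 90 && node.active_users_monthly >= 10
  );
}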

For my blog, Hugo will download this list when building the static site and incorporate it into the sharing page. So for most Fediverse instances, the page will already know what software they run. And if it doesn’t know an instance? Fall back to downloading the node info. And if that fails as well, just assume that it’s Mastodon.

Is this a perfect solution? Certainly not. Is it good enough? Yes, I think so. And we need that list of Fediverse instances anyway, for autocomplete functionality on the instance field.

The complete code

This solution is now part of the MemE theme for Hugo, see the corresponding commit. The components/post-share.html partial is where the buttons are displayed. These link to the fedishare.html page and pass various parameters via the anchor part of the URL (not the query string, so that they aren’t saved to server logs).

The fedishare.html page is stored under assets. That’s because turning a template into a static page doesn’t happen by default and would otherwise require additional changes to the configuration file. But that asset loads the fedishare.html partial where the actual logic is located.

Building that page involves querying the Fediverse Observer API and filtering the response. Websites that are rebuilt frequently can configure Hugo’s cache to avoid hitting the network on every build.

The resulting list is put into a <datalist> element, used for autocomplete on the instance field. The same list is also used by the getSoftwareName() function in the fedishare.js asset, the script powering the page. The fallback for unknown instances is retrieving their node info; if that also fails, the script just assumes Mastodon.

Once a Fediverse application has been determined, the script will take the corresponding endpoint, replace the placeholders with actual values and trigger navigation to that address.
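
Put together, that final step might look roughly like this (my simplified sketch, not the actual fedishare.js; the parameter names in the anchor part are assumptions):

// Simplified sketch of the page's final step, not the actual fedishare.js.
// The parameter names in the anchor part are my assumptions.
const params = new URLSearchParams(location.hash.slice(1));
const instance = document.getElementById("instance").value;

// endpoints is the application-to-template map shown earlier.
const software = await getSoftwareName(instance); // sketch above
const template = endpoints[software] || endpoints["mastodon"];

location.href = buildShareUrl(instance, template, {
  text: params.get("text"),
  title: params.get("title"),
  description: params.get("description"),
  url: params.get("url")
});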

A year after the disastrous breach, LastPass has not improved

In September last year, a breach at LastPass’ parent company GoTo (formerly LogMeIn) culminated in attackers siphoning out all data from their servers. The criticism from the security community has been massive. This was not so much because of the breach itself (such things happen) but because of the many obvious ways in which LastPass made matters worse: taking months to notify users, failing to provide useful mitigation instructions, downplaying the severity of the attack, ignoring technical issues that had been publicized years earlier and made the attackers’ job much easier. The list goes on.

Now almost a year has passed. LastPass promised to improve, both as far as their communication goes and on the technical side of things. So let’s take a look at whether they managed to deliver.

TL;DR: They didn’t. So far I have failed to find evidence of any improvements whatsoever.

Update (2023-09-26): It looks like at least the issues listed under “Secure settings” are finally going to be addressed.

A very battered ship with torn sails in a stormy sea, on its side the ship’s name: LastPass

The communication

The initial advisory

LastPass’ initial communication around the breach was nothing short of a disaster. It happened more than three months after the users’ data was extracted from LastPass servers. Yet rather than taking responsibility and helping affected users, their PR statement was designed to downplay and to shift blame. For example, it talked a lot about LastPass’ secure default settings but failed to mention that LastPass never really enforced those. In fact, people who created their accounts a while ago and used very outdated (insecure) settings never saw so much as a warning.

The statement concluded with “There are no recommended actions that you need to take at this time.” I called this phrase “gross negligence” back when I initially wrote about it, and I still stand by this assessment.

The detailed advisory

It took LastPass another two months of strict radio silence to publish a more detailed advisory. That’s where we finally learned some more about the breach. We also learned that business customers using Federated Login are very much affected by the breach; the previous advisory explicitly denied that.

But even then, this was conveyed only indirectly, in recommendation 9 out of 10 for LastPass’ business customers. It seems that LastPass considered generic advice on protecting against phishing attacks more important than mitigating their own breach. And the recommendation didn’t actually say “You are in danger. Rotate K2 ASAP.” Instead, it said “If, based on your security posture or risk tolerance, you decide to rotate the K1 and K2 split knowledge components…” That’s the conclusion of a large pile of text essentially claiming that there is no risk.

At least the advisory for individual users got the priorities right. It was master password first, iterations count after that, and all the generic advice at the end.

Except: they still failed to admit the scope of the breach. The advice was:

Depending on the length and complexity of your master password and iteration count setting, you may want to reset your master password.

And this is just wrong. The breach already happened. Resetting the master password will help protect against future breaches, but it won’t help with the passwords already compromised. This advice should have really been:

Depending on the length and complexity of your master password and iteration count setting, you may want to reset all your passwords.

But this would amount to saying “we screwed up big time.” Which they definitely did. But they still wouldn’t admit it.

Improvements?

A blog post by the LastPass CEO Karim Toubba said:

I acknowledge our customers’ frustration with our inability to communicate more immediately, more clearly, and more comprehensively throughout this event. I accept the criticism and take full responsibility. We have learned a great deal and are committed to communicating more effectively going forward.

As I’ve outlined above, the detailed advisory published simultaneously with this blog post still left a lot to be desired. But this sounds like a commitment to improve. So maybe some better advice has been published in the six months that have passed since then?

No, this doesn’t appear to be the case. Instead, the detailed advisory moved to the “Get Started – About LastPass” section of their support page. So it’s now considered generic advice for LastPass users. Any specific advice on mitigating the fallout of the breach, assuming that it isn’t too late already? There doesn’t seem to be any.

The LastPass blog has been publishing lots of articles again, often multiple per week. However, there doesn’t appear to be any useful information here at all, only PR. To add insult to injury, LastPass published an article in July titled “How Zero Knowledge Keeps Passwords Safe.” It gives a generic overview of zero knowledge that largely doesn’t apply to LastPass. It concludes with:

For example, zero-knowledge means that no one has access to your master password for LastPass or the data stored in your LastPass vault, except you (not even LastPass).

This is bullshit. That’s not how LastPass has been designed, and I wrote about it five years ago. Other people did as well. LastPass didn’t care, otherwise this breach wouldn’t have been such a disaster.

Secure settings

The issue

LastPass likes to boast how their default settings are perfectly secure. But even assuming that this is true, what about the people who do not use their defaults? For example the people who created their LastPass account a long time ago, back when the defaults were different?

The iterations count is particularly important. Few people have heard about it, it being hidden under “Advanced Settings.” Yet when someone tries to decrypt your passwords, this value is an extremely important factor. A high value makes successful decryption much less likely.

As of 2023, the current default value is 600,000 iterations. Before the breach the default used to be 100,100 iterations, making decryption of passwords six times faster. And before 2018 it was 5,000 iterations. Before 2013 it was 500. And before 2012 the default was 1 iteration.
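
To get a feeling for what these numbers mean for an attacker, here is a quick Node.js measurement (my own illustration; LastPass derives the encryption key via PBKDF2-SHA256, and the cost of each password guess grows roughly linearly with the iterations count):

// Illustration only: the cost of each password guess grows roughly
// linearly with the PBKDF2 iterations count.
const {pbkdf2Sync} = require("node:crypto");

for (const iterations of [1, 500, 5000, 100100, 600000]) {
  const start = process.hrtime.bigint();
  pbkdf2Sync("correct horse battery staple", "salt", iterations, 32, "sha256");
  const ms = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`${iterations} iterations: ${ms.toFixed(2)} ms per guess`);
}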

What happened to all the accounts which were created with the old defaults? It appears that for most of these accounts LastPass failed to fix the settings automatically. People didn’t even receive a warning. So when the breach happened, quite a few users reported having their account configured with 1 iteration, massively weakening the protection provided by the encryption.

It’s the same with the master password. In 2018 LastPass introduced much stricter master password rules, requiring at least 12 characters. While I don’t consider length-based criteria very effective at guiding users towards secure passwords, LastPass failed to enforce even this rule for existing accounts. Quite a few people first learned about the new password complexity requirement when reading about the breach.

Improvements?

I originally asked LastPass about enforcing a secure iterations count for existing accounts in February 2018. LastPass kept stalling, and I eventually published my research without them ever making certain that all users are secure. They then ignored this issue for another four years until the breach happened.

And while the breach prompted LastPass to increase the default iterations count, they appear to be still ignoring existing accounts. I just logged into my test account and checked the settings:

Screenshot of LastPass settings. “Password Iterations” setting is set to 5000.

There is no warning whatsoever. Only if I try to change this setting does a message pop up:

For your security, your master password iteration value must meet the LastPass minimum requirement: 600000

But people who are unaware of this setting will not be warned. And while LastPass definitely could update this setting automatically when people log in, they still choose not to do it for some reason.

It’s the same with the master password. The password of my test account is weak because the account was created years ago. If I try to change it, I will be forced to choose a password that is at least 12 characters long. But as long as I just keep using the same password, LastPass won’t force me to change it – even though it definitely could.

There isn’t even a definitive warning when I log in. There is only this notification in the menu:

Screenshot of the LastPass menu. Security Dashboard has a red dot on its icon.

Only after clicking “Security Dashboard” will a warning message show up:

Screenshot of a LastPass message titled “Master password alert.” The message text says: “Master password strength: Weak (50%). For your protection, change your master password immediately.” Below it a red button titled “Change password.”

If this is such a critical issue that I need to change my master password immediately, why won’t LastPass just tell me to do it when I log in?

This alert message apparently pre-dates the breach, so there don’t seem to be any improvements in this area either.

Update (2023-09-26): Last week LastPass sent out an email to all users:

New master password requirements. LastPass is changing master password requirements for all users: all master passwords must meet a 12-character minimum. If your master password is less than 12-characters, you will be required to update it.

According to this email, LastPass will start enforcing stronger master passwords at some unspecified point in the future. Currently, this requirement is still not being enforced, and the email does not list a timeline for this change.

More importantly, when I logged into my LastPass account after receiving this email, the iterations count finally got automatically updated to 600,000. The email does not mention any changes in this area, so it’s unclear whether this change is being applied to all LastPass accounts this time.

Brian Krebs quotes the LastPass CEO as stating: “We have been able to determine that a small percentage of customers have items in their vaults that are corrupt and when we previously utilized automated scripts designed to re-encrypt vaults when the master password or iteration count is changed, they did not complete.” Quite frankly, I find this explanation rather doubtful.

First of all, reactions to my articles indicate that the percentage of old LastPass accounts which weren’t updated is far from small. There are lots of users finding an outdated iterations count configured in their accounts, yet so far only two have reported their accounts being updated automatically.

Second: my test account in particular is unlikely to contain “corrupted items” which previously prevented the update. Back in 2018 I changed the iterations count to 100,000 and back to 5,000 manually. This worked correctly, so no corruption was present at that point. The account was virtually unused after that except for occasional logins; no data changes.

Unencrypted data

The issue

LastPass PR likes to use “secure vault” as a description of LastPass data storage. This implies that all data is secured (encrypted) and cannot be retrieved without the knowledge of the master password. But that’s not the case with LastPass.

LastPass encrypts passwords, user names and a few other pieces of data. Everything else is unencrypted, in particular website addresses and metadata. That’s a very bad idea, as security researchers kept pointing out again and again. In November 2015 (page 67). In January 2017. In July 2018. And there are probably more.

LastPass kept ignoring this issue. So when their data leaked last year, the attackers gained not only encrypted passwords but also plenty of plaintext data. LastPass was forced to admit this but once again downplayed it by claiming that website addresses are not sensitive data. And users were rightfully outraged.

Improvements?

Today I logged into my LastPass account and then opened https://lastpass.com/getaccts.php. This gives you the XML representation of your LastPass data as it is stored on the server. And I fail to see any improvements compared to this data one year ago. I gave LastPass the benefit of the doubt and created a new account record. Still:

<account url="68747470733a2f2f746573742e6465" last_modified="1693940903" >

The data in the url field is merely hex-encoded and can easily be translated back into https://test.de. And the last_modified field is a Unix timestamp; no encryption here either.
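
Decoding that field takes a few lines at most (for illustration):

// For illustration: decoding the hex-encoded url field.
const hex = "68747470733a2f2f746573742e6465";
const url = hex.match(/../g)
  .map(byte => String.fromCharCode(parseInt(byte, 16)))
  .join("");
console.log(url); // "https://test.de"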

Conclusions

A year after the breach, LastPass still hasn’t provided their customers with useful instructions on mitigating the breach, nor has it admitted the full scope of the breach. They also failed to address any of the long-standing issues that security researchers have been warning about for years. At the time of writing, owners of older LastPass accounts still have to become active on their own in order to fix outdated security settings. And much of LastPass data isn’t being encrypted.

I honestly cannot explain LastPass’ refusal to fix the security settings of existing accounts. Back when I was nagging them about it, they outright lied to me. Do they not have any senior engineers on staff who could implement this change? Do they really not care as long as they can blame the users for not updating their settings? Beats me.

As to not encrypting all the data, I am starting to suspect that LastPass actually wants visibility into your data. Do they need to know which websites you have accounts on in order to guide some business decisions? Or are they making additional income by selling this data? I don’t know, but LastPass persistently ignoring this issue makes me suspicious.

Either way, it seems that LastPass considers the matter of their breach closed. They published their advisory in March this year, and that’s it. Supposedly, they improved the security of their infrastructure, which nobody can verify of course. There is nothing else coming, no more communication and no technical improvements. Now they will only be publishing more lies about “zero knowledge architecture.”

Chrome Sync privacy is still very bad

Five years ago I wrote an article about the shortcomings of Chrome Sync (as well as a minor issue with Firefox Sync). Chrome Sync has seen many improvements since then, so the time seems right to revisit it and see whether it respects your privacy now.

Spoiler: No, it doesn’t. It improved, but that’s an improvement from outright horrible to merely very bad. The good news: today you can use Chrome Sync in a way that preserves your privacy. Google however isn’t interested in helping you figure out how to do it.

The default flow

Chrome Sync isn’t some obscure feature of Google Chrome. In fact, as of Chrome 116 setting up sync is part of the suggested setup when you first install the browser:

Screenshot of Chrome’s welcome screen with the text “Sign in and turn on sync to get your bookmarks, passwords and more on all devices. Your Chrome, Everywhere” and the highlighted button saying “Continue.”

Clicking “Continue” will ask you to log into your Google account after which you are suggested to turn on sync:

A prompt titled “Turn on sync.” The text below says: “You can always choose what to sync in settings. Google may personalize Search and other services based on your history.” The prompt has the buttons Settings, Cancel and (highlighted) Yes, I’m in.

Did you click the suggested “Yes, I’m in” button here? Then you’ve already lost. You just allowed Chrome to upload your data to Google servers, without any encryption. Your passwords, browsing history, bookmarks, open tabs? They are no longer yours alone; you have allowed Google to access them. Didn’t you notice the “Google may personalize Search and other services based on your history” text in the prompt?

In case you have any doubts, this setting (which is off by default) gets turned on when you click “Yes, I’m in”:

Screenshot of Chrome’s setting titled “Make searches and browsing better” with the explanation text “Sends URLs of pages you visit to Google.” The setting is turned on.

Yes, Google is definitely watching over your shoulder now.

The privacy-preserving flow

Now there is a way for you to use Chrome Sync and keep your privacy. In the prompt above, you should have clicked “Settings.” Which would have given you this page:

A message saying “Setup in progress” along with buttons “Cancel” and “Confirm.” Below it Chrome settings, featuring “Sync” and “Other services” sections.

Do you see what you need to do here before confirming? Anyone? Right, the “Make searches and browsing better” option has already been turned on and needs to be switched off. But that isn’t the main issue.

“Encryption options” is what you need to look into. Don’t trust the claim that Chrome is encrypting your data; expand this section.

The selected option says “Encrypt synced passwords with your Google Account.” The other option is “Encrypt synced data with your own sync passphrase. This doesn't include payment methods and addresses from Google Pay.”

That default option sounds sorta nice, right? What it means however is: “Whatever encryption there might be, we get to see your data whenever we want it. But you trust us not to peek, right?” The correct answer is “No” by the way, as Google is certain to monetize your browsing history at the very least. And even if you trust Google to do no evil, do you also trust your government? Because often enough Google will hand over your data to local authorities.

The right way to use Chrome Sync is to set up a passphrase here. This will make sure that most of your data is safely encrypted (payment data being a notable exception), so that neither Google nor anyone else with access to Google servers can read it.

What does Google do with your data?

Deep in Chrome’s privacy policy is a section called How Chrome handles your synced information. That’s where you get some hints about how your data is being used. In particular:

If you don’t use your Chrome data to personalize your Google experience outside of Chrome, Google will only use your Chrome data after it’s anonymized and aggregated with data from other users.

So Google will use the data for personalization. But even if you opt out of this personalization, they will still use your “anonymized and aggregated” data. As seen before, promises to anonymize and aggregate data cannot necessarily be trusted. Even if Google is serious about this, proper anonymization is difficult to achieve.

So how do you make sure that Google doesn’t use your data at all?

If you would like to use Google’s cloud to store and sync your Chrome data but you don’t want Google to access the data, you can encrypt your synced Chrome data with your own sync passphrase.

Yes, sync passphrase it is. This sentence is the closest thing I could find to an endorsement of sync passphrases, hidden in a document that almost nobody reads.

This makes perfect sense of course. Google has no interest in helping you protect your data. They rather want you to share your data with them, so that Google can profit off it.

It could have been worse

Yes, it could have been worse. In fact, it was worse.

Chrome Sync used to be enabled immediately when you signed into Chrome, without any further action from you. It also used to upload your data unencrypted before you had a chance to change the settings. Besides, the sync passphrase would only result in passwords being encrypted and none of the other data. And there used to be a warning scaring people away from setting a sync passphrase because it wouldn’t allow Google to display your passwords online. And the encryption was horribly misimplemented.

If you look at it this way, there have been considerable improvements to Chrome Sync over the past five years. But it still doesn’t resemble a service meant to respect users’ privacy. That’s by design of course: Google really doesn’t want you to use effective protection for your data. That data is their profit.

Comparison to Firefox Sync

I suspect that people skimming my previous article on the topic took away from it something like “both Chrome Sync and Firefox Sync have issues, but Chrome fixed theirs.” Nothing could be further from the truth.

While Chrome did improve, they are still nowhere close to where Firefox Sync started off. Thing is: Firefox Sync was built with privacy in mind. It was encrypting all data from the very start, by default. Mozilla’s goal was never monetizing this data.

Google on the other hand built a sync service that allowed them to collect all of users’ data, with a tiny encryption shim on top of it. Outside pressure seems to have forced them to make Chrome Sync encryption actually usable. But they really don’t want you to use this, and their user interface design makes that very clear.

Given that, the Firefox Sync issue I pointed out is comparably minor. It isn’t great that five years weren’t enough to address it. This isn’t a reason to discourage people from using Firefox Sync however.

Packet Storm

Retrieved title: News ≈ Packet Storm, 6 item(s)
Ubiquiti Router Users Urged To Secure Devices Targeted By Russian Hackers

Windows Zero Day Exploited By North Korean Hackers In Rootkit Attack

Meta Patches Facebook Account Takeover Vulnerability

Iranian Hackers Target Aviation And Defense Sectors In Middle East

GitHub Besieged By Millions Of Malicious Repositories In Ongoing Attack

Australian Spy Chief Fears Sabotage Of Critical Infrastructure