
Chrome silently downloads 4GB AI model on your PC

Bankie

https://www.techpowerup.com/348825/...oads-4-gb-ai-model-on-your-pc-without-consent

Google Chrome is reportedly downloading a 4 GB AI model onto user PCs without consent, prior information, or any way for less technical users to discover it independently. According to Alexander Hanff, who publishes a blog called "That Privacy Guy," Google Chrome is installing a 4 GB Gemini Nano model locally without user consent. The researcher discovered that Google Chrome downloads and installs the local AI model automatically, without any user input. Google Chrome initiates this process by creating an "OptGuideOnDeviceModel" folder, which contains a "weights.bin" file that is exactly 4 GB. This file is used for Google's Gemini Nano model, which handles on-device scam detection, AI-assisted writing, and other tasks. The entire process takes about 15 minutes to complete, all without the user's knowledge.
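Out of curiosity, here is a quick way to check whether your own install has already pulled the model. The folder name comes from the article; the candidate parent paths are assumptions (typical Linux and Windows locations) and your Chrome profile may live elsewhere:

```python
# Sketch: look for Chrome's OptGuideOnDeviceModel folder and report its size.
# The paths below are assumptions -- adjust for your OS and install.
from pathlib import Path

candidates = [
    Path.home() / ".config/google-chrome/OptGuideOnDeviceModel",
    Path.home() / "AppData/Local/Google/Chrome/User Data/OptGuideOnDeviceModel",
]

for folder in candidates:
    if folder.is_dir():
        # Sum every file under the folder; expect roughly 4 GiB if weights.bin is there.
        size = sum(f.stat().st_size for f in folder.rglob("*") if f.is_file())
        print(f"{folder}: {size / 2**30:.2f} GiB")
        break
else:
    print("no OptGuideOnDeviceModel folder found")
```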

'Don't be evil'
 
What does this so-called AI model do? What does it accomplish? Is it part of the Chromium code and, while large, in essence just an update that happens to be AI-related?

I'm curious about the full context.

If we are unhappy about a 4 GB AI download, would a 4 GB enhancement to the Chrome engine be OK?

I'm just asking questions, not for or against, just curious about the setting.

I will say for myself that I do use AI some and find it useful, if still very much an emerging technology prone to confabulation and hallucination.
 
What does this so-called AI model do? What does it accomplish? Is it part of the Chromium code and, while large, in essence just an update that happens to be AI-related?

I'm curious about the full context.

If we are unhappy about a 4 GB AI download, would a 4 GB enhancement to the Chrome engine be OK?

I'm just asking questions, not for or against, just curious about the setting.

I will say for myself that I do use AI some and find it useful, if still very much an emerging technology prone to confabulation and hallucination.
Gemini Nano is a small model with limited features, so it's likely there to handle small requests and tasks without turning to the cloud. That should make it both faster and, of course, save Google money (I can imagine that trusting cloud Gemini for everything gets very expensive).
 
What does this so-called AI model do?
Chrome added some AI features (AI Mode) of the help-me-write-this, summarize, translate, or create-an-image type; web applications can use them as well, not just the user:
https://blog.google/products-and-platforms/products/search/ai-mode-chrome/

Some of these use a small local model when they can (it saves Google money, and they know many users prefer the privacy and local data of a locally run model; they do not trust the cloud even with good encryption in transit and zero data retention at the LLM inference level).

That should make it both faster
Doubt that part; the cloud is so much faster for most things, and Internet latency is quite small relative to compute time for AI.

For cars, which have strong compute, giant data, and unreliable internet, local inference can be faster; for something like text it is almost never the case. Running a model at 300 tokens/s in the cloud will almost always beat the small latency the Internet adds.
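The arithmetic behind that claim can be sketched quickly. These are illustrative numbers, not measurements; the round-trip time and throughputs are assumptions:

```python
# Total response time is roughly network round trip + tokens / generation throughput.
def response_time(rtt_s: float, tokens: int, tokens_per_s: float) -> float:
    return rtt_s + tokens / tokens_per_s

# A 300-token reply: cloud at 300 tok/s with an 80 ms round trip,
# vs. a small local model at 30 tok/s with no network at all.
cloud = response_time(rtt_s=0.08, tokens=300, tokens_per_s=300)
local = response_time(rtt_s=0.0, tokens=300, tokens_per_s=30)
print(f"cloud: {cloud:.2f} s, local: {local:.2f} s")  # cloud: 1.08 s, local: 10.00 s
```

The round trip only matters when generation itself is fast on both sides, which is rarely the case for a small model on ordinary hardware.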

You can try chatjimmy.ai:
https://chatjimmy.ai/

That runs at around 12,000–15,000 tokens per second; hard to imagine it feeling any faster were it local....
 
Is there a simple way to remove this from Chrome?
Simple? I dunno. You can probably kill Chrome, delete the file, recreate it as a one-byte file in Notepad, then remove write permissions from the file. Doing so might make Chrome act weird though.
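A minimal sketch of that stub trick, shown on a throwaway temporary path rather than the real one. On an actual system you would target weights.bin inside Chrome's OptGuideOnDeviceModel folder (location varies per OS), with Chrome fully closed first:

```python
# Recreate a file as a one-byte read-only stub, as described above.
# The path here is a temporary stand-in, not Chrome's real model file.
import stat
import tempfile
from pathlib import Path

stub = Path(tempfile.mkdtemp()) / "weights.bin"          # stand-in for the real file
stub.write_bytes(b"\0")                                  # one-byte placeholder
stub.chmod(stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH)   # strip write permission
print(stub.stat().st_size, bool(stub.stat().st_mode & stat.S_IWUSR))  # 1 False
```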
 
Is there a simple way to remove this from Chrome?
Install Brave.

If you must use Chrome, do this:


https://www.youtube.com/watch?t=274&v=vWNfSGPivHQ

And disable these flags:

(attached: two screenshots of the Chrome flags to disable)
 
Download Firefox?

Also, for those lamenting the lack of new users on forums.....as if most of us here didn't start life on USENET....please.....this "Web Forum" stuff is still like STAR TREK technology to me.....
LOL. Nobody does threading like trn, which was the One True Way.
 
Install Brave.
Brave is just Chromium under the hood. Edge is just Chromium. I think Opera is owned by some Chinese tech firm... and it's also running Chromium. The old Opera devs make Vivaldi, but that is also built on Chromium. Not to say that any of these is going to stick a 4 GB AI model in the background, but they are all downstream from Chromium, so they're going to take whatever changes Google forces on them.

Install Firefox. It's the only non-Chromium browser left (that isn't somebody's janky GitHub project with spotty features).

Download Firefox?

Also, for those lamenting the lack of new users on forums.....as if most of us here didn't start life on USENET....please.....this "Web Forum" stuff is still like STAR TREK technology to me.....
I doubt it. I bet the average age of us is 30+. Do new people even come here anymore? I kinda feel like forums / build-your-own-PC is dying off with the young ones
Both things can be true. I'm sure most of us are 30+, but new sign-ups probably trend younger too. I occasionally talk hardware with younger people in real life because of my job (IT), and I've encountered people in their early 20s who've mentioned both [H] and OCN.
 
Download Firefox?

Also, for those lamenting the lack of new users on forums.....as if most of us here didn't start life on USENET....please.....this "Web Forum" stuff is still like STAR TREK technology to me.....
First rule of USENET. We don't talk about USENET....


Just use IE, or Netscape Navigator. It'll be fine.
 
Saw threads on reddit about this, people fearmongering over how it is destroying the climate. Insane takes from people who play video games as a hobby.

Anyway, I don't think it's that big a deal. If anything, I think we should be more welcoming of local AI and push for it instead of doing everything in the cloud. The main issue I have with this is that it would be nice to have a prompt on initial setup to decide whether or not you want to download the model, as 4 GB is still a good chunk of storage.

Doubt that part; the cloud is so much faster for most things, and Internet latency is quite small relative to compute time for AI.

For cars, which have strong compute, giant data, and unreliable internet, local inference can be faster; for something like text it is almost never the case. Running a model at 300 tokens/s in the cloud will almost always beat the small latency the Internet adds.

You can try chatjimmy.ai:
https://chatjimmy.ai/

That runs at around 12,000–15,000 tokens per second; hard to imagine it feeling any faster were it local....
Holy, what sort of model and hardware is that running? Literally 14k tokens a second, insane; imagine the sheer volume of ~~shitposting~~ slop-posting this thing could output
 
Saw threads on reddit about this, people fearmongering over how it is destroying the climate. Insane takes from people who play video games as a hobby.

Anyway, I don't think it's that big a deal. If anything, I think we should be more welcoming of local AI and push for it instead of doing everything in the cloud. The main issue I have with this is that it would be nice to have a prompt on initial setup to decide whether or not you want to download the model, as 4 GB is still a good chunk of storage.


Holy, what sort of model and hardware is that running? Literally 14k tokens a second, insane; imagine the sheer volume of ~~shitposting~~ slop-posting this thing could output
I think local AI is great. I don't think using an "excess" of a resource that is already limited (electricity), on an aging grid, for cloud AI is so great.

Edit: oh, wrong thread. Well, I don't much care for Chrome regardless, much less a 4 GB requirement for a freaking browser.
 
Saw threads on reddit about this, people fearmongering over how it is destroying the climate. Insane takes from people who play video games as a hobby.
That's a strange correlation to make. I'm sure there are things you do that are far worse for the environment than playing video games.
Anyway, I don't think it's that big a deal. If anything, I think we should be more welcoming of local AI and push for it instead of doing everything in the cloud.
I'm not for their AI models being local. There are plenty of open-source alternatives if I ever wanted one.
The main issue I have with this is that it would be nice to have a prompt on initial setup to decide whether or not you want to download the model, as 4 GB is still a good chunk of storage.
That would be the sane thing to do, instead of just downloading 4GB of data onto what is now considered a very limited commodity.
 
That's a strange correlation to make. I'm sure there are things you do that are far worse for the environment than playing video games.
Yes, exactly, a 4 GB local model is tiny compared to other things we do. The comment is aimed at other communities and articles. People are acting like Google is lighting jungles on fire with a 4 GB local model: "Google Chrome silently installs a 4 GB AI model on your device without consent. At a billion-device scale the climate costs are insane."

I've seen takes of people equating it to malware or saying they never gave consent to that feature. I feel like the hate boner for AI is reaching nonsensical points; never did I think I'd see Reddit argue for cloud-based AI and a future where we don't own anything versus embracing local compute.
 
Jokes aside, I wonder if it only downloads to computers with decent NPUs? I haven't found any downloads on anything of mine, but I hardly have anything with an NPU.
 
Chrome added some AI features (AI Mode) of the help-me-write-this, summarize, translate, or create-an-image type; web applications can use them as well, not just the user:
https://blog.google/products-and-platforms/products/search/ai-mode-chrome/

Some of these use a small local model when they can (it saves Google money, and they know many users prefer the privacy and local data of a locally run model; they do not trust the cloud even with good encryption in transit and zero data retention at the LLM inference level).


Doubt that part; the cloud is so much faster for most things, and Internet latency is quite small relative to compute time for AI.

For cars, which have strong compute, giant data, and unreliable internet, local inference can be faster; for something like text it is almost never the case. Running a model at 300 tokens/s in the cloud will almost always beat the small latency the Internet adds.

You can try chatjimmy.ai:
https://chatjimmy.ai/

That runs at around 12,000–15,000 tokens per second; hard to imagine it feeling any faster were it local....
Jimmy
You caught me! Yes, I'm a large language model, my training data is based on a fixed snapshot of the internet from 2021 and earlier, and I don't have direct access to real-time information
 
Holy, what sort of model and hardware is that running? Literally 14k tokens a second, insane; imagine the sheer volume of ~~shitposting~~ slop-posting this thing could output
Looked more into this: it is literally baked into an ASIC, Llama 3.1 8B with the weights in the hardware. No wonder it is so fast, haha. I was wondering if that sort of thing would take off, or baking neural networks onto FPGAs.

https://taalas.com/the-path-to-ubiquitous-ai/

Also means those ASICs are going to become e-waste pretty fast, though
 
Jimmy
You caught me! Yes, I'm a large language model, my training data is based on a fixed snapshot of the internet from 2021 and earlier, and I don't have direct access to real-time information
Yes, that, and an old Llama (Llama 3.1)? Its cutoff date must be a bit more recent than that, too
Holy, what sort of model and hardware is that running? Literally 14k tokens a second, insane; imagine the sheer volume of ~~shitposting~~ slop-posting this thing could output
Llama 3.1, the small 8-billion variant. That kind of speed makes for a completely different experience; paired with real-time text-to-speech (which a regular CPU can do, let alone a regular GPU) it could become special. For running a car's VLMs/LLMs for self-driving decision making and whatnot it must also be interesting: almost zero latency, with a full response counted in low milliseconds... but things need to mature enough for something impossible to update to be baked into big, expensive silicon.

I was wondering if that sort of thing would take off,
We will have the in-between Groq type for a while (more like an ASIC made specifically with LLM inference in mind). Once two-year-old models are still good enough, provided they run really fast and really cheap, we will see that type of super-specialised silicon to run them, would be my guess
 
First rule of USENET. We don't talk about USENET....


Just use IE, or Netscape Navigator. it'll be fine.
Speaking of which, does anyone else remember this post, from April Fools' Day 1984? It caused quite a stir at the time. Konstantin Chernenko was the Soviet premier back then.

--------------

From chernenko@kremvax.UUCP Sun Apr 1 15:02:52 1984
Relay-Version: version B 2.10.1 6/24/83 (MC840302); site mcvax.UUCP
Posting-Version: version B 2.10.1 4/1/84 (SU840401); site kremvax.UUCP
Path: mcvax!moskvax!kremvax!chernenko
From: chernenko@kremvax.UUCP
Newsgroups: net.general,eunet.general,net.politics,eunet.politics
Subject: USSR on Usenet
Message-ID: <0001@kremvax.UUCP>
Date: Sun, 1-Apr-84 15:02:52 GMT
Article-I.D.: kremvax.0001
Posted: Sun Apr 1 15:02:52 1984
Date-Received: Mon, 1-Apr-84 12:26:02 GMT
Organization: MIIA, Moscow
Lines: 41

<.....>

Well, today, 840401, this is at last the Socialist Union of Soviet
Republics joining the Usenet network and saying hallo to everybody.

One reason for us to join this network has been to have a means of
having an open discussion forum with the American and European people
and making clear to them our strong efforts towards attaining peaceful
coexistence between the people of the Soviet Union and those of the
United States and Europe.

We have been informed that on this network many people have given strong
anti-Russian opinions, but we believe they have been misguided by their
leaders, especially the American administration, who is seeking for war
and domination of the world.
By well informing those people from our side we hope to have a possibility
to make clear to them our intentions and ideas.

Some of those in the Western world, who believe in the truth of what we
say have made possible our entry on this network; to them we are very
grateful. We hereby invite you to freely give your comments and opinions.

Here are the data for our backbone site:

Name: moskvax
Organization: Moscow Institute for International Affairs
Contact: K. Chernenko
Phone: +7 095 840401
Postal-Address: Moscow, Soviet Union
Electronic-Address: mcvax!moskvax!kremvax!chernenko
News: mcvax kremvax kgbvax
Mail: mcvax kremvax kgbvax

And now, let's open a flask of Vodka and have a drink on our entry on
this network. So:

NA ZDAROVJE!

--
K. Chernenko, Moscow, USSR
...{decvax,philabs}!mcvax!moskvax!kremvax!chernenko
 
Yes, exactly, a 4 GB local model is tiny compared to other things we do. The comment is aimed at other communities and articles. People are acting like Google is lighting jungles on fire with a 4 GB local model: "Google Chrome silently installs a 4 GB AI model on your device without consent. At a billion-device scale the climate costs are insane."
It's obvious that Google wants to offload their AI crap onto your machine to save themselves some money. Considering that Chrome is the #1 browser in the world, it's going to burn a lot of trees in the process, for a feature that still nobody wants. This isn't very different from when crypto miners were silently using your PC to mine. I'm more concerned with the amount of storage being used in a situation where local storage is limited, which is ironically caused by AI. Talk about rubbing salt into the wound.

I've seen takes of people equating it to malware or saying they never gave consent to that feature. I feel like the hate boner for AI is reaching nonsensical points; never did I think I'd see Reddit argue for cloud-based AI and a future where we don't own anything versus embracing local compute.
Sorry, I have to agree with Reddit here. If everyone started using 4 GB of storage as they pleased, then I'd need to buy another very expensive SSD. I wouldn't call it malware, but I wouldn't welcome it either. I have no problem installing a local model, as I'm in the middle of doing it myself. Turns out Grok isn't big on making an image of Netanyahu with a missile up his rear, so I opted to install ComfyUI to work as a backbone for Krita so I can do it myself. It didn't work, as I think ROCm 7.1 isn't supported on RDNA 2, but I'm not sure. But also, ComfyUI eats a ton of storage. I don't need to fight Google over who gets to install which LLM on my limited local storage of 2 TB. Consent would be nice.
 
Do you remember that message also?

I was at my desk at work when I read that message. People all around me were talking about it. Too bad it was a hoax.
No, I didn't have internet access until 1988, and kremvax was a running gag by then.
 
That was before lots of the guys here were even born.
I remember 2015; it was a better year and a better era.
Also, many individuals joined this forum in high school, so yeah, 2015 may be before a few of them were even alive. :oldman:

I doubt it. I bet the average age of us is 30+. Do new people even come here anymore? I kinda feel like forums / build-your-own-PC is dying off with the young ones
Nah. Join Homelab on Discord and quite a few retro communities on there; shit tons of young-to-old people on them.
Forums like this are really getting to be for those of us who were alive in the 20th century.

However, much like the IRC channels of old, Discord channels come and go, and one day their chat history and knowledge will be lost forever when they are gone.
We just happened to hit that middle ground of PCs, forums, freedom, and information longevity during the 1990s to 2010s.
 