is apple intelligence safe?

Apple Intelligence is interesting... and not terrible if done correctly.

🛒 GREAT BOOKS FOR THE LOWEST LEVEL 🛒

🔥 SOCIALS 🔥
Comments

As an AI researcher, I can say with confidence that the real benefits of AI are advances most people will never hear about. It's definitely not what the big tech companies are hyping up and shoving in every application.

existenceisillusion

Pretty sure the "memory is encrypted while it runs and at rest" part at 6:41 or so is a bit of an inaccuracy. The memory is isolated while the application runs, which is not the same thing. This isolation protects against things like cache snooping, but the actual execution is done on unencrypted data.

Put another way, if you wanted to execute on encrypted data, then you would have to have something like a generic framework for on-the-fly application of fully homomorphic encryption and it would come at a significant performance penalty.
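
To make that concrete, here is a minimal Python sketch of a Paillier-style additively homomorphic scheme with deliberately tiny, insecure parameters (the primes and values are made up for readability). It only illustrates the idea: a server can combine ciphertexts, here adding two numbers, without ever seeing the plaintexts, which is the kind of "execution on encrypted data" such a framework would have to generalize, at far greater cost.

import math, random

# Toy Paillier-style additively homomorphic encryption (Python 3.9+).
# The primes are tiny and the scheme as written is NOT secure; it only
# shows that a "server" can add numbers it never sees in plaintext.
p, q = 1789, 1861                       # hypothetical toy primes
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)            # Carmichael's lambda for n = p*q
g = n + 1                               # standard generator choice

def L(u):
    return (u - 1) // n                 # Paillier's L function

mu = pow(L(pow(g, lam, n2)), -1, n)     # decryption constant

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

a, b = 123, 456
ca, cb = encrypt(a), encrypt(b)
c_sum = (ca * cb) % n2                  # the server multiplies ciphertexts only
assert decrypt(c_sum) == a + b          # the key holder recovers the sum: 579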

Tristam

Realm has a LOT of layers of "just trust me bro" for me to trust that it's completely private, especially when it comes to encryption.

zimmerderek

I don't understand what prevents the server from just giving back a fake code signature and then running other code that exports data, given also that this is not open-source software. Is the only piece of code on the client that this whole infrastructure is based upon just server.checksum == magicstring?
I don't think the infrastructure in which the server code is running was ever a privacy issue; the issue was (and still is) the server code itself.
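
To put the worry in code: here is a rough Python sketch (all names are made up, none of this is Apple's actual PCC client) contrasting the naive "server reports a checksum" check with what attestation is supposed to add, namely a hardware-signed measurement of the loaded image that the client can also compare against an independently published list.

import hashlib, hmac

# Hypothetical stand-ins: the image the operator claims to run, and a
# measurement list published in some independent, append-only log.
claimed_image = b"pcc-node-release"
published_measurements = {hashlib.sha256(claimed_image).hexdigest()}

def naive_check(reported_checksum, magic):
    # Weak model: the server simply tells the client its checksum.
    # A lying server can report whatever string satisfies the check.
    return reported_checksum == magic

def attested_check(image, signature, attestation_key):
    # Stronger model: hardware measures and signs the image that was
    # actually loaded; the client verifies the signature AND checks the
    # measurement against the public log. HMAC stands in for the hardware
    # vendor's signing scheme purely for illustration.
    measurement = hashlib.sha256(image).hexdigest()
    expected = hmac.new(attestation_key, measurement.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature) and measurement in published_measurements

print(naive_check("magicstring", "magicstring"))  # True, yet proves nothing about the code

Of course, the stronger check only shifts trust to the signing key and whoever runs the log, which is exactly the point about the server code itself.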

eliamaggioni

ARMv9 CCA doesn't encrypt realm memory; it just prevents the hypervisor from accessing it via a bounce buffer, which is a cheaper alternative to AMD SEV-SNP/Intel TDX, which actually encrypt memory. The implication is significant. The main point of these hardware TEEs is to prevent rogue-admin attack vectors. CCA leaves a huge backdoor for DMA and physical memory analysis, where people can just spray the memory modules with coolant, pull them out, and attach them to a memory analyzer to read all the content, which is a well-documented attack against hibernated laptops with FDE.

vicaya

Assuming there are really no hardware-level attacks (big assumption), it seems like PKE is the only reliable way to ensure data gets into the realms securely. But then again, that means someone is holding a key, and if that key is compelled or stolen, why wouldn't this allow for a MITM?

It sure as hell makes it more complicated - but I don't see how this architecture makes things so much more secure yet. It would be interesting to hear more about this.

tcurdt

Apple: we have new cool insane features.
People: what features?
Apple: we don't know.

norbert.kiszka

Did he just say I was living under a rock?
That's kinda rude...
:(

Alfred-Neuman

"a computer cannot be held accountable, so a computer must never make management decisions" -- IBM, 1970s.

foobarf

No amount of software can change the fact that if you control the hardware, you control the software running on it. The rest is just marketing in the end.

monacolulu

I do not trust this one byte. The encryption, the verification, the implementation, none of it

Blackholefourspam

Apple's (or any proprietary vendor's) SEV-SNP/TDX/CCA implementations can only be security-through-obscurity theatre hiding behind the complex attestation process if the workload is not open source, i.e., there is no way for end users to find out whether the encrypted workload does anything funny. The fundamental premise of CC is that the owner of the workload knows exactly what's running inside a TEE through cryptography. Unless Apple open-sources the workload and lets everyone inspect it and hash it to verify its integrity, their PCC is ultimately BS. I hope that Apple does the right thing, as it would have a profound impact on the entire SaaS ecosystem and dramatically boost the open source community. AGPLv3 for the win!

vicaya

Your still frames when opening the vids, before playing them, are, as always, amazing. :D

hndt

the PCC white paper is pretty interesting, and goes into detail about how everything works

DominicGo

I'm gonna go back under this rock I've been living under; never heard about this "apple intelligence" s**t there.

zxuiji

_Execution Level Zero_ sounds like a movie featuring Michael Scarn.

rtothec

While I do appreciate the option to do cloud computing privately, my reason for wanting local-first (in my own computing, at least) is agency. I want to have the most possible control over my computing, so in the cases where I would actually use "AI" (translation, voice-to-text, etc.) I want the software running locally. I've seen some discussion about "CRDTs" for local-first document collaboration, and that seems intriguing.
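
For anyone curious, the simplest CRDTs are tiny. Below is a rough Python sketch of a grow-only counter (a G-Counter); the names are illustrative and not from any particular library, but it shows the local-first property in question: every device updates locally, and merges commute, so replicas converge without a central server.

class GCounter:
    # Grow-only counter CRDT: each replica only increments its own slot,
    # and merging takes the element-wise max across replicas.
    def __init__(self, replica_id: str):
        self.replica_id = replica_id
        self.counts = {}                 # replica_id -> local increments

    def increment(self, by: int = 1):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + by

    def value(self) -> int:
        return sum(self.counts.values())

    def merge(self, other: "GCounter"):
        # Element-wise max makes merge commutative, associative, and
        # idempotent, which is what guarantees convergence without coordination.
        for rid, n in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), n)

# Two devices edit offline, then sync in either order and agree:
phone, laptop = GCounter("phone"), GCounter("laptop")
phone.increment(3)
laptop.increment(2)
phone.merge(laptop)
laptop.merge(phone)
assert phone.value() == laptop.value() == 5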

reillyspitzfaden

They said exactly the same thing for, like, the iPhone 3 or something: they said it needed a new CPU to use the first Siri, so you needed a new phone. Then someone reverse-engineered it and found that it was just making a web request, and they could do it from Android as long as they had a device ID from an iPhone.

georgehelyar

I think AI itself can be used for a lot of good things, especially when it comes to accessibility. Like Firefox is experimenting with automatically creating descriptions for images for people who use screen readers. That's great. And I heard once about a service or an idea for a service where if you got a scam call, you could redirect it to an AI that's just trying to waste their time. Also great. But as for what the big companies like MS are doing... I don't see much value in it, tbh. Not everything needs AI in it.

For Apple, I hope they manage to make it private, so from a technical point of view I'm curious about this. Not going to use it, but the privacy pov is interesting.

the-answer-is-

I agree that this is a step in the right direction and that, no matter how secure something seems, there can still be a hole. But the entire time you were explaining the architecture I was just thinking, "What if they tell you all your data is going to this secure place, but actually send it to a regular old server?"

Whether it's the self-destructive desire of governments to have backdoors in software, or cutting costs (since regular servers will probably be in greater supply for a long time), or any other reason, I can imagine this happening. Is it ultimately just the company's word for where this data is going?

prw