Something I’ve been thinking about rather often is storing data outside of NDEF and using it for authentication purposes.
e.g. the string “NExT” is stored at the end of NExT’s memory.
I know you CAN store data there, because if you write a big NDEF record, then a smaller one, and then look at all the memory, you still see remnants of the old NDEF data.
This is especially interesting for the xSIID, as it has over 1 kB of non-NDEF space we could use to store something.
I’m thinking I should store an encrypted password there, to be decrypted by my computer.
(The computer knows one part of the secret, I have one part, and maybe there’s even a third part as a really small password.)
I could easily use that as a password manager if I had an app to read it.
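A minimal sketch of the kind of split being described, assuming an XOR split between two shares (one for the computer, one for the implant). Unlike storing half the characters in plaintext, either XOR share on its own is statistically random:

```python
import secrets

def split_secret(secret: bytes) -> tuple[bytes, bytes]:
    """Split a secret into two XOR shares; both are needed to recover it."""
    pad = secrets.token_bytes(len(secret))             # share 1, e.g. kept on the computer
    share = bytes(a ^ b for a, b in zip(secret, pad))  # share 2, e.g. written to the implant
    return pad, share

def join_secret(pad: bytes, share: bytes) -> bytes:
    """XOR the shares back together to recover the secret."""
    return bytes(a ^ b for a, b in zip(pad, share))

password = b"correct horse battery staple"
on_computer, on_implant = split_secret(password)
assert join_secret(on_computer, on_implant) == password
```

Note this still has the clone/replay problem discussed further down the thread: anyone who reads the implant share can copy it.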
I’ve played with Tasker, but I can’t seem to read the memory beyond valid NDEF records…
I don’t really have a question, just wanted to leave this thought somewhere (or did I before, whatever…)
You would be relying on obfuscation, but I guess it’s valid. Just for fun, I designed a little Arduino display that reads the contents of the second record of an NDEF message and prints it on the device.
Most phones will just read the first record unless specifically told to read all records. Of course, only I know how to set it up.
(After writing this I want to clarify that I’m responding to you, but also just putting information out there so other people are aware of it too; I don’t want to sound patronizing or anything.)
So, I’m not an expert on crypto, but this doesn’t seem like a super secure idea to me. If you’re not worried about the password being cracked, then I don’t see a big reason not to do it, but if you are worried, I’d strongly recommend keeping the password in a hardware-backed keystore/keychain that can reliably throttle unlock attempts. Most modern Android phones can do this, and I think every modern iOS device can.

I might look into writing software to do this, because I think I’ve seen talk of splitting a secret into chunks in a few places now, and the issue with putting most of a secret in an NDEF record is that someone can trivially brute-force the rest of it, because the math doesn’t work in an intuitive way.

For example, say you have a 12-character password that you want to partially keep on your implant: you could store 6 characters of it in an NDEF record and remember the remaining 6, but I wouldn’t recommend it. The time to brute-force a 12-character all-lowercase password is about 80,000 days on a modern PC, which isn’t bad. Once you store six characters in an NDEF record, though, an average computer can crack the remaining six in less than a minute.
If you store the password encrypted with a hardware-backed AES key, though, the hardware can ensure that nobody is allowed to extract the AES key encrypting your password, and the key can only be used by someone who correctly provides a PIN to the keystore implementation. If the keystore throttles you to one PIN guess per second, you can use a 6-character lowercase word for your PIN, and instead of under a minute, it’ll take about 5 years to crack. By then, with any luck, you’ll have realized that someone stole your phone and rotated the compromised password.
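The same arithmetic with hardware-enforced throttling. Once the guess rate is pinned at one per second by the secure hardware, the attacker’s CPU speed stops mattering:

```python
# With hardware throttling, the rate is fixed regardless of attacker hardware.
KEYSPACE = 26 ** 6            # 6 lowercase characters
SECONDS_PER_YEAR = 31_557_600

worst_case = KEYSPACE / SECONDS_PER_YEAR   # ~9.8 years to try every PIN at 1 guess/sec
average_case = worst_case / 2              # ~4.9 years expected before a hit
print(f"average crack time: {average_case:.1f} years")
```

That’s where the “5 years instead of under a minute” figure comes from: same keyspace, but the throttle replaces ~14 million guesses per second with one.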
Hopefully that made sense. I might actually throw together some code to do this, because there should be better, easier options than storing partial secrets in plaintext!
This obviously breaks down if your threat model includes your implant being scanned and your phone being broken into (not even necessarily at the same time), but at some point you do just have to give up (or add another password into the mix and take it to the grave).
Yes, just splitting a password in two doesn’t help if it’s unencrypted. If the password is stored in an encrypted form, and the device uses it as part of a challenge-response authentication, then it could be secure, but at that point you’re basically reimplementing an existing solution. You might just want to wait and use a VivoKey Apex.
If that’s true, that’s one situation where an xDF2 might be advantageous, because I think its challenge-response tech is a shared-key scheme where both parties prove they know the key without giving it up.

But yeah, I’ve been thinking a lot about the implications of having what amounts to a YubiKey in my body, because that’s essentially how I’d treat an Apex. It doesn’t seem perfect. I’d still want to PIN-protect any keys on it and make sure I trust any device I enter the PIN into, because it would be trivial for a malicious device to decrypt or sign something extremely sensitive instead of decrypting the email I’m trying to read. Commercial solutions use smartcard readers with built-in PIN pads so you don’t have to trust any computer with your PIN, but I don’t think implanting a PIN pad would go particularly well.
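For anyone unfamiliar, here’s a toy sketch of the shared-key, mutual challenge-response idea being described. This is NOT the xDF2’s actual protocol (DESFire chips use an AES-based mutual authentication); it’s just an HMAC illustration of the principle that each side proves knowledge of the key by answering a fresh nonce, and the key itself never crosses the air gap:

```python
import hmac, hashlib, secrets

shared_key = secrets.token_bytes(16)   # provisioned into both tag and reader beforehand

def respond(key: bytes, nonce: bytes) -> bytes:
    """Answer a challenge; only a MAC over the nonce is transmitted, never the key."""
    return hmac.new(key, nonce, hashlib.sha256).digest()

# Reader challenges tag with a fresh nonce, so old responses can't be replayed:
reader_nonce = secrets.token_bytes(16)
tag_answer = respond(shared_key, reader_nonce)             # computed on the tag
assert hmac.compare_digest(tag_answer, respond(shared_key, reader_nonce))

# The tag can challenge the reader the same way, making authentication mutual:
tag_nonce = secrets.token_bytes(16)
reader_answer = respond(shared_key, tag_nonce)             # computed by the reader
assert hmac.compare_digest(reader_answer, respond(shared_key, tag_nonce))
```

The fresh nonce on each exchange is what separates this from the static-data schemes criticized later in the thread.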
And sorry, I think I missed that you planned to encrypt the secret before it went into your implant.
If you store your keys on a smart card (e.g. a YubiKey), then the private key only ever lives on the card and the encryption is done there. It’s not normally possible to pull the key back off the smart card, so if someone steals your PIN, they also need your smart card before they can use it.
Given the Northstar implant, I’m sure there are some people who would implant a PIN pad.
Of course, yeah, the key is never extracted. But when even allowing an adversary to use a key once can be disastrous (for example, with a certificate authority’s root cert), it’s important to prevent that, either by PIN-protecting usage of the key like you can with YubiKeys, or with some sort of k-of-n scheme, or both. PIN-protecting key usage isn’t very useful, though, if an adversary compromises the computer used to enter the PIN, which is why some solutions do PIN entry purely in hardware. I looked around and found the Thales IDBridge CT700 as an example. So by “commercial systems” I mean extremely high-security stuff.
This has me thinking now. I wonder how hard it would be to do secure PIN entry with one of those multipole radial magnets that are normally used for magnetic encoders. If you had coils placed correctly in an implant you could harvest energy from a magnet’s rotation (outside your body) and simultaneously sense how many poles have gone by, which could theoretically let you build a combination lock implant. It would probably be really difficult to harvest enough energy from the rotation to keep the PIN data in SRAM long enough for the implant to be subsequently read by a device though. I bet you’d have to be really quick about it unless you stuck a giant capacitor under your skin to feed the SRAM chip. This would be well beyond “security” and into tinfoil hat territory, but it sounds cool as hell.
I’m not 100% clear on what “device uses it as part of a challenge response authentication” means, so you might be right… but in general, simply storing encrypted data for use in an authentication process doesn’t secure anything. Let’s say someone was able to read your xNT (NTAG216) and the stored encrypted data… sure, they can’t decrypt it, but they don’t need to… they can program a flexMN magic NTAG with the same UID, signature, and encrypted data. The reader, and the applications behind it, cannot tell the difference and will happily read the encrypted data off the fraudulent tag and assume it’s legit. This is why chips must actually perform the cryptography internally, not just store its output.
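The clone/replay problem in miniature. This toy sketch (hypothetical reader logic, not any real product’s) shows why a reader that checks static data accepts a byte-for-byte copy, while a reader that issues a fresh challenge rejects both copies and replayed responses:

```python
import hmac, hashlib, secrets

# A tag that only stores static data can be cloned byte-for-byte.
real_tag = {"uid": b"\x04\x11\x22\x33", "blob": b"<encrypted password>"}
cloned_tag = dict(real_tag)                       # flexMN-style magic-tag copy

def static_check(tag) -> bool:
    """A reader that compares stored bytes can't tell real from clone."""
    return tag["uid"] == real_tag["uid"] and tag["blob"] == real_tag["blob"]

assert static_check(cloned_tag)                   # the clone passes: no protection

# A tag that computes over a fresh nonce can't be beaten by copying old reads.
key = secrets.token_bytes(16)                     # never leaves the genuine chip

def live_check(respond) -> bool:
    """Challenge the tag with a nonce it has never seen before."""
    nonce = secrets.token_bytes(16)
    expected = hmac.new(key, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(respond(nonce), expected)

assert live_check(lambda n: hmac.new(key, n, hashlib.sha256).digest())  # real chip passes
sniffed = hmac.new(key, secrets.token_bytes(16), hashlib.sha256).digest()
assert not live_check(lambda n: sniffed)          # a replayed old response fails
```

The UID, the signature, and the encrypted blob are all in the first category: static bytes, copyable by anyone who gets one good read.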
Maybe I’m making a moot point above… I’m low on time today and just skimmed the thread… sorry if this is inane.
It depends entirely on how the applets are programmed to function. For example, the OTP applet can be set up to require a simple password to be sent to the chip in order to perform code generation. It’s a very simple password mechanism that could be sniffed to recover the password (not the TOTP secrets themselves)… but it doesn’t have to be simple… it could be complex as shit and use public-key enrollment etc. if you wanted to get crazy with it… and you can easily build these solutions and deploy them to your Apex through a Fidesmo developer account… or we can merge your pull requests into our open-source applets and deploy a hardened, upgraded version. That’s the true beauty of Apex.
Yes, this can be done, but honestly a public-key enrollment process for each authorized reader is enough… use a sufficiently long static “PIN” (password) to enable enrollment in a relatively secure environment… then “daily use” is a public-key challenge affair… no unauthorized reads in the wild.
Oh sure, I know what a challenge-response is, but I guess my point is that if the encrypted data is simply being stored on and read off an implant, then there’s no active cryptography going on, and the encrypted data could simply be copied to another chip, or emulated, and pass as a valid chip… hence no protection at all, really.
Basically, what I’m reading is that you want to include the chip in some sort of authentication process. You will be reading some static data off the chip and using that as part of the process. Regardless of encryption or anything fancy going on, the problem is centered around the “reading static data” bit. Whether that’s the UID, some plaintext, or a bit of encrypted data, it is still simply being read off the chip as static data. So anyone could still just copy or emulate that static data and impersonate the chip portion of your security mechanism, however complex the rest of it might be.
I will say that splitting encrypted data to unlock a password manager is “more secure” in that, without that static data, an attacker is pretty much hosed… but once they get a read or a sniff of that static data, it’s game over. That’s all I’m saying.