Some car keys have RFID immobilizers; that would be my guess.
I saw that too. It's definitely an animal chip. The frosted appearance on the glass means it has a biobond-type coating; industrial chips typically don't have this.
Question for everybody, but especially @amal because you've got the YouTube channel (or anyone else who posts pro or semi-pro).
I keep running into repackaged YouTube videos. Usually it's someone who regularly posts videos to their channel (multiple disparate YouTube channels / personalities). It just seemed to crop up everywhere all at once.
So, is YouTube pushing this? The way they push slop? Or have they just cut revenue sharing, causing the revenue-dependent to repackage / repost in order to bulk up the numbers?
Just curious what's driving this.
No idea.. probably a mix of trying to keep numbers up compared to AI slop being shit out 459 times a day per channel, and honestly the AI editing process, which makes repackaging super easy.
The algorithm changes and everyone changes with it. I make no money with YouTube since I hardly put ads on any of my videos, and on the ones I do, I only run the minimum ad play at the start and end. Really the only reason I was interested in getting to the point where I could enable monetization is that it also enables integrations with shopping and products.. though I have not taken the time to figure out how to utilize that yet.
I hate the structure of new videos; it's not just repackaged content, it's proper slop!
With that out of the way, that AI mannerism is starting to get on my nerves… It reminds me of the "it has been proven that" statements of the new-wave quacks that my mother is obsessed with… And sadly, that fools people who trust the social signals of authority instead of objective reality and reason.
Huh..
Coming back I guess.
Well … 3 weeks later I can't read the xSIID either
I tried all the devices I have and … … … nothing …
The silence … dark silence …
Any guesses as to what killed it?
Time to check for breaks?
Not sure … it only sees the DT wedge …
There is an LF next to it I use for my motorcycle, but that shouldn't do anything
On the other end, the UG4 is reading better after a few failed removal attempts
Your theory is dependent on AI being steerable past a certain point. Once any structure with decision-making in it becomes big enough, emergent behaviors appear and it loses steerability. Think of economies: there are levers that can be pulled to influence direction, but you cannot tell it what to do. If any system grows large enough, thinking that you can influence its behavior to your own ends is a fantasy.
Have you tried Nair?
No, that's exactly my point. The way we're going, it won't be steerable, and quite to the contrary, it will in many cases steer us.
I'm not making a distinction between "steerable" by the controlling companies vs. by the user. What I got from your comment is that the users cannot steer it but those controlling the model can. I don't know if either will be possible.
This is the first time that I am hearing about a changeable UHF chip
EPC-changeable is pretty common. TID-changeable is what I really wanted but can't afford yet.
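For context, the EPC/TID distinction comes from the Gen2 (ISO 18000-63) tag memory layout. This dict is only an illustration of that layout, not a real reader API:

```python
# Sketch of the four EPC Gen2 memory banks, to show why "EPC changeable"
# is common while "TID changeable" is rare. Bank numbers and typical lock
# behavior follow the Gen2 spec; the dict itself is just for illustration.

GEN2_BANKS = {
    0b00: {"name": "Reserved", "contents": "kill + access passwords",
           "typically_writable": True},   # writable, often access-protected
    0b01: {"name": "EPC", "contents": "CRC-16, PC bits, EPC code",
           "typically_writable": True},   # the ID you normally rewrite
    0b10: {"name": "TID", "contents": "chip class/vendor/model + serial",
           "typically_writable": False},  # usually permalocked at the factory
    0b11: {"name": "User", "contents": "optional free-form user memory",
           "typically_writable": True},
}

def rewritable_banks():
    """Banks a stock tag will normally let you overwrite in the field."""
    return [b["name"] for b in GEN2_BANKS.values() if b["typically_writable"]]
```

The TID bank is usually permalocked at the factory precisely so it can serve as an unforgeable serial number; chips with a writable TID ("magic" tags) defeat that, which is presumably why they're niche and pricey.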
Ah, I see. My concern was mainly over the user's loss of control, yes. A loss of control all at once to the neural machine and to the corporate machine that foolishly summoned it like Faust conjuring up Mephistopheles. And even if they create the AI, the corporations ultimately become the users when they deploy it in an attempt to serve some commercial function. Whether the human is the end user or the corporation deploying the learning machine, I'll here call them both the "user." At the same time, by "machine" I don't just mean neural networks, but any kind of machine that adapts to feedback, including the corporation. The corporation is a machine; its users and its employees all use it alike. User or machine, we are all cybernetic systems after all.
A learning machine defies control. We speak of keeping a "human in the [cybernetic feedback] loop," but we tend to neglect to consider which side of the loop that human is on: the controller or the controlled. Perhaps we could say that we become more cyborg as the rate of I/O events between the flesh and the machine increases, but as for the question of what kind of cyborg we become, more transhuman (cyborg philosophy, increased human control) or more subhuman (android philosophy, decreased human control), there are a few metrics I like to look for. Basically, it comes down to the inequality between the flesh and the machine with regard to things like the rate at which outgoing data is composed, the size of dataframes exchanged, the complexity of the data being edited by the messages, the amount of independent processing time that each party gets between each I/O event, etc. (That's the TL;DR of it if you'd like to pass on the explanation that follows.)
Any stateful system, learning or not, becomes harder to control, as one can no longer know what outputs to expect from given inputs unless one understands the dependent state. The more dependent state there is, the more the user must know and keep track of at every moment in order to determine what effect their inputs will produce.
Make that state a constantly moving target, and the internal state will quickly become unlearnable, and the machine, therefore, uncontrollable. Imagine trying to learn to accurately throw a ball whose mass, friction, elasticity, etc. keep shuffling around between each throw. Learning machines are such moving targets.
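The "moving target" point can be sketched minimally; the `LearningMachine` class and its numbers are made up for illustration, not any real model:

```python
# Stateless vs. stateful-with-drift: the same input stops predicting
# the same output once hidden state shifts between interactions.

def stateless(x):
    """Stateless: the same input always yields the same output."""
    return 2 * x

class LearningMachine:
    """Stateful: output depends on hidden state that drifts after each use."""
    def __init__(self):
        self._gain = 2.0          # hidden internal state

    def __call__(self, x):
        out = self._gain * x
        self._gain += 0.5         # the state shifts with every interaction
        return out

m = LearningMachine()
first, second = m(10), m(10)      # identical inputs: 20.0, then 25.0
```

To predict `second`, the user has to track `_gain`, i.e. the machine's entire interaction history; the stateless function asks nothing of them.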
The more steps the machine takes between each input, the harder it is to control. The user must be able to predict a longer, more complicated series of calculations in order to know the outcome of a single input. So, if the machine runs for longer between being monitored, or if it runs faster to accomplish the same indeterminate task (i.e. "Smart" Tech: n. tech which uses more brains to get the same amount of work done), then it's harder to control.
Conversely, the fewer steps the user may take between each input, the less control the user has: e.g. when the user's attention is being consumed by an inefficient UI that requires constant babysitting, the user has fewer mental clock cycles to spare for keeping track of what the machine is doing on the backend. Or, when the machine spams the user with notifications or keeps them on a feed of one drip of content after the next, especially with shorter-form content, then the user is spending a higher percentage of their time selecting from the content they are prompted with and a lower percentage of their time contemplating what to select (or whether to select anything at all).
The greater the amount of data the user can efficiently input in each message, such as by typing up precise CLI instructions on a keyboard, the greater the user's control, while the less data the user can encode in each message, such as if they are restricted to selecting one of the three choices that will fit on their touchscreen, the less control the user has. Conversely, the more data the machine sends to the user per message, such as rapid-fire high-stimulus videos or instantly generated verbose essays, the more control the machine tends to have.
It's easier to control if, like a steering wheel on a car driving on ice, the user can constantly provide updates to the system, monitoring and correcting it in short, rapid intervals. However, if it's a learning machine, then this becomes a double-edged sword. The tighter the loop, that is, the higher the frequency of I/O exchange between the machine and the user, the more important the above considerations become. Greater control goes to the side which has greater initiative to decide when I/O events occur: e.g. a machine that waits silently at the command prompt gives more control to the user, whereas one that constantly spams the user with notifications all day gives more control to the machine.
I'm not sure I completely follow what you mean, but I think I get the gist of it and I agree. I would point out, though, that the "state" of any modern AI is either frozen while waiting for direction or impossible to know without a pause button, since it is processing logic so fast that it puts electron clouds to shame. I try to imagine how much a "trillion floating point operations" is and then cram them into a second, and that's just to be in the ballpark with it.
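That scale can be made concrete with a bit of arithmetic, taking 1 TFLOP/s as the round number from the post (real hardware varies widely):

```python
# Back-of-envelope numbers for "a trillion floating point operations per second".
OPS_PER_SECOND = 1e12                    # 1 TFLOP/s, the round number above

seconds_per_op = 1 / OPS_PER_SECOND      # 1e-12 s: one picosecond per operation
ops_per_frame = OPS_PER_SECOND / 60      # ~1.7e10 ops between two 60 Hz frames
ops_per_reaction = OPS_PER_SECOND * 0.2  # ~2e11 ops in a ~200 ms human reaction time
```

So between two consecutive "glances" from a user, the machine gets tens of billions of operations of independent processing time, which is exactly the inequality between flesh and machine described above.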
What is this from?
Something like "the Patchwork" or "Gossip Goblin"
A Cyberpunk 2077 short movie. AI-created, or scripted, or something.
Should be enough in there for you to find it
(I've never seen it, so that's all I know, sorry)

