The anti🚫-derailment🚃 & thread🧵 hijacking🔫 thread🧵 ⁉

Some car keys have RFID immobilizers, that would be my guess

3 Likes

I saw that too. It’s definitely an animal chip. The frosted appearance on the glass means it has a biobond-type coating. Industrial chips typically don’t have this.

1 Like

Question for everybody, but especially @amal because you’ve got the youtube channel. (or anyone else who posts pro or semi-pro).

I keep running into repackaged youtube videos. Usually it’s someone who regularly posts videos to their channel (multiple disparate youtube channels / personalities). It just seemed to crop up everywhere all at once.

So, is youtube pushing this? The way they push slop? Or have they just cut revenue sharing, causing the revenue dependent to just repackage / repost in order to bulk up the numbers?

Just curious what’s driving this.

1 Like

No idea.. probably a mix of trying to keep numbers up against AI slop being shit out 459 times a day per channel, and honestly the AI editing process, which makes repackaging super easy.

The algorithm changes and everyone changes with it. I make no money with YouTube since I hardly put ads on any of my videos, and on the ones I do, I only enable the minimum ad play at the start and end. Really the only reason I was interested in getting to the point where I could enable monetization is that it also enables integrations with shopping and products.. though I have not taken the time to figure out how to utilize that yet.

2 Likes

I hate the structure of new videos; it’s not just repackaged content, it’s proper slop!

With that out of the way, that AI mannerism is starting to get on my nerves… It reminds me of the “it has been proven that” statements of the new wave quacks that my mother is obsessed with… And sadly, it fools people who trust the social signals of authority instead of objective reality and reason.

:pleading_face:

2 Likes

Huh..

Coming back I guess.

3 Likes

Well … 3 weeks later I can’t read the xSIID either :cry:

I tried all the devices I have and … … … nothing …
The silence … dark silence …

3 Likes

Any guesses as to what killed it?

Time to check for breaks?

1 Like

Not sure … it only sees the DT wedge …

There is an LF next to it I use for my motorcycle, but that shouldn’t do anything :thinking:

On the other end, the UG4 is reading better after a few failed removal attempts :winking_face_with_tongue:

1 Like

Your theory depends on AI being steerable past a certain point. Once any structure with decision making in it becomes big enough, emergent behaviors appear and it loses steerability. Think of an economy – there are levers that can be pulled to influence its direction, but you cannot tell it what to do. If any system grows large enough, thinking that you can influence its behavior to your own ends is a fantasy.

Have you tried Nair?

No, that’s exactly my point. The way we’re going, it won’t be steerable, and quite to the contrary, it will in many cases steer us.

I’m not making a distinction between ‘steerable’ by the controlling companies vs the user. What I got from your comment is that the users cannot steer it but those controlling the model can. I don’t know if either will be possible.

This is the first time that I am hearing about a changeable UHF chip :eyes:

2 Likes

EPC-changeable is pretty common. TID-changeable is what I really wanted but can’t afford yet
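For context on why one is common and the other pricey: in EPC Gen2 tags the EPC sits in its own writable memory bank, with its length encoded in the PC word, while the TID bank is factory-locked on nearly all tags. A rough Python sketch (bank numbers and the PC-word bit layout follow the Gen2 spec; the helper names are mine):

```python
# Toy sketch of EPC Gen2 memory layout: why EPC is rewritable but TID usually isn't.
# Bank numbers per the Gen2 air-interface spec; helper names are made up.

BANK_RESERVED = 0b00  # kill/access passwords
BANK_EPC      = 0b01  # PC word + EPC, normally writable
BANK_TID      = 0b10  # tag/vendor ID, permalocked at the factory on almost all tags
BANK_USER     = 0b11  # optional user memory

def pc_word(epc: bytes) -> int:
    """Build the PC word for a new EPC: top 5 bits = EPC length in 16-bit words."""
    if len(epc) % 2:
        raise ValueError("EPC must be a whole number of 16-bit words")
    return (len(epc) // 2) << 11

new_epc = bytes.fromhex("300833B2DDD9014000000000")  # a 96-bit (6-word) EPC
print(hex(pc_word(new_epc)))  # 6 words -> 0x3000
```

Rewriting the EPC is then just a write to bank 01; a “TID-changeable” (magic) tag has to break the factory lock on bank 10, which is why those are rare and expensive.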

6 Likes

Ah, I see. My concern was mainly over the user’s loss of control, yes. A loss of control all at once to the neural machine and to the corporate machine that foolishly summoned it like Faust conjuring up Mephistopheles. And even if they create the AI, the corporations ultimately become the users when they deploy it in an attempt to serve some commercial function. Whether the human is the end user or the corporation deploying the learning machine, I’ll here call them both the “user.” At the same time, by “machine” I don’t just mean neural networks, but any kind of machine that adapts to feedback, including the corporation. The corporation is a machine, its users and its employees all use it alike. User or machine, we are all cybernetic systems after all.

A learning machine defies control. We speak of keeping a “human in the [cybernetic feedback] loop,” but we tend to neglect to consider which side of the loop that human is on - the controller or the controlled. Perhaps we could say that we become more cyborg as the rate of I/O events between the flesh and machine increases, but as for the question of what kind of cyborg we become, more transhuman (cyborg philosophy, increased human control) or more subhuman (android philosophy, decreased human control), there are a few metrics I like to look for. Basically, it comes down to the inequality between the flesh and the machine with regard to things like the rate at which outgoing data is composed, the size of dataframes exchanged, the complexity of the data being edited by the messages, the amount of independent processing time that each party gets between each I/O event, etc. (That’s the TL;DR of it if you’d like to pass on the explanation that follows.)

Any stateful system, learning or not, becomes harder to control as one can no longer know what outputs to expect given the inputs unless one understands the dependent state. The more dependent state, the more the user must know and keep track of at every moment in order to determine what effect will be produced by their inputs.

Make that state a constantly moving target, and the internal state will quickly become unlearnable, and the machine, therefore, uncontrollable. Imagine trying to learn to accurately throw a ball whose mass, friction, elasticity, etc keep shuffling around between each throw. Learning machines are such moving targets.
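A toy illustration of that point about state (the names and numbers here are mine, not from any real system): a stateless function is perfectly predictable, but once hidden state accumulates, identical inputs stop producing identical outputs, and the user can’t predict anything without tracking that state.

```python
# Toy illustration: why hidden state makes a system harder to steer.

def stateless(x):
    return 2 * x  # same input -> same output, always predictable

class Stateful:
    def __init__(self):
        self.memory = 0  # hidden state the user must track to predict outputs

    def step(self, x):
        self.memory += x             # every input permanently shifts the state...
        return 2 * x + self.memory   # ...so identical inputs give different outputs

m = Stateful()
print([stateless(3) for _ in range(3)])  # [6, 6, 6]
print([m.step(3) for _ in range(3)])     # [9, 12, 15]
```

A learning machine is the `Stateful` case with the update rule itself also changing, which is what makes the target keep moving.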

The more steps the machine takes between each input, the harder it is to control. The user must be able to predict a longer, more complicated series of calculations in order to know the outcome to a single input. So, if the machine runs for longer between being monitored, or if it runs faster to accomplish the same indeterminate task (i.e. “Smart” Tech: n. Tech which uses more brains to get the same amount of work done. >𐃷< ) then it’s harder to control.

Conversely, the fewer steps the user may take between each input, the less control the user has - e.g. when the user’s attention is being consumed by an inefficient UI that requires constant babysitting, the user has fewer mental clock cycles to spare for keeping track of what the machine is doing on the backend. Or, when the machine spams the user with notifications, or keeps them on a feed of one drip of content after the next, especially shorter-form content, the user spends a higher percentage of their time selecting from the content they are prompted with and a lower percentage contemplating what to select (or whether to select anything at all).

The greater the amount of data the user can efficiently input in each message, such as by typing up precise CLI instructions on a keyboard, the more control the user has, while the less data the user can encode in each message, such as when they are restricted to selecting one of the three choices that will fit on their touchscreen, the less control the user has. Conversely, the more data the machine sends to the user per message, such as rapid-fire high-stimulus videos or instantly generated verbose essays, the more control the machine tends to have.

It’s easier to control if, like a steering wheel on a car driving on ice, the user can constantly provide updates to the system, monitoring and correcting it in short, rapid intervals. However, if it’s a learning machine, then this becomes a double-edged sword. The tighter the loop, that is, the higher the frequency of I/O exchange between the machine and the user, the more important the above considerations become. Greater control goes to the side which has greater initiative to decide when I/O events occur - e.g. a machine that waits silently at the command prompt gives more control to the user, whereas one that constantly spams the user with notifications all day gives more control to the machine.

I’m not sure I completely follow what you mean, but I think I get the gist of it and I agree. I would point out, though, that the ‘state’ of any modern AI is either frozen while waiting for direction or impossible to know without a pause button, since it is processing logic so fast that it puts electron clouds to shame. I try to imagine how much a ‘trillion floating point operations’ is and then cram them into a second, and that’s just to get in the ballpark.
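To put a rough number on that (back-of-envelope arithmetic, not from the post): if a machine does 10^12 operations per second and a human could somehow audit one operation per second, checking a single second of its work would take tens of millennia.

```python
# Back-of-envelope: one second of a 1 TFLOP/s machine, audited at human speed.
ops_per_second = 1e12             # one teraflop
human_rate = 1.0                  # optimistic: one checked operation per second
seconds_per_year = 365.25 * 24 * 3600

years_to_audit = ops_per_second / human_rate / seconds_per_year
print(f"{years_to_audit:,.0f} years")  # ~31,688 years
```

And that is a deliberately conservative figure; real accelerators run orders of magnitude faster than one TFLOP/s.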

I usually don’t like this gritty version of biohacking.. but it’s a pretty cool quote:

3 Likes

What is this from?

Something like “the Patchwork” or “Gossip Goblin”
A Cyberpunk 2077 short movie. AI-created, or scripted, or something.

Should be enough in there for you to find it
(I’ve never seen it, so that’s all I know, sorry)

1 Like