25+ yr Java/JS dev
Linux novice - running Ubuntu (no Windows/Mac)

  • 0 Posts
  • 153 Comments
Joined 3 months ago
Cake day: October 14th, 2024

  • The only difference between a vigilante and a murderer is state of mind. Luigi got it right. No dead bystanders. No redeeming qualities of his target, who is probably responsible for a far greater number of deaths. He put work into planning this and it shows, but he got really lucky, too.

    If we had a bunch running around, we’d all be less safe. And a hell of a lot of them would probably target villains we don’t all agree deserve it. So I don’t condone it. But in this one case, I think it worked out.




  • Problem is it wasn’t illegal. So the law is no use here. So exposing the activities they are engaged in right in public is no use. It’s like whistleblowing on Trump colluding with Russia. He did it right in front of everybody and got away with it.

    Also, ultimately profits don’t have to always increase. In fact, it’s an impossibility over the long term without diversifying, and even then growth will slow. There’s not a damn thing wrong with a business that consistently, reliably turns 1B into 1.1B (or whatever).

    Killing a CEO is very likely to result in imprisonment and/or death, and unlikely to directly cause change. It’ll spark some discussion on the news, but is that really worth throwing your life away?

    Maybe? I mean a life lived in misery isn’t worth much. At the end of the day, only he can answer whether it was worth the cost, but the rest of us have the opportunity to build on the message he sent. Will we capitalize (lol) on that opportunity? Probably not, but Mangione was undoubtedly a spark. Eventually a spark will catch, but of course it’s never certain who will get burned.




  • At the end of the day, I think the problem is that so many people don’t identify Thompson as a killer. I think if more people saw Thompson as a killer, sympathy would be less controversial.

    I don’t condone vigilante murder, but this is a case where I think the calculus Mangione did to conclude that the benefits of his action outweighed the consequences was probably correct, and that there wasn’t a more reasonable way to address his grievance. And if you do something wrong and it turns out for the best, you still did something wrong, so get outta here ya little rascal and don’t let me catch you again.


  • Agency is really tricky, I agree, and I think there is maybe a spectrum. Some folks seem to be really internally driven. Most of us are probably status quo day to day and only seek change in response to input.

    As for multi-modal not being strictly word prediction, I’m afraid I’m stuck with an older understanding. I’d imagine there is some sort of reconciliation engine which takes the perspectives from the different modes and gives a coherent response. Maybe it intelligently slides weights while everything is in flight? I don’t know what they’ve added under the covers, but as far as I know it’s just more layers of math and not anything that would really be characterized as thought. I’m happy to be educated by someone in the field; that’s where most of my understanding comes from, though it’s a couple of years old, and I have other friends who work in the field as well.

    Oh, and same regarding the GPU. I’m trying to run local on a GTX 1660, which is about the lowest card even capable of doing the job.
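    For what it’s worth, a minimal sketch of what that local setup looks like on a card like that, assuming llama-cpp-python and some small quantized GGUF model (the file path, layer count, and model choice are placeholders you’d tune to your VRAM):

    ```python
    # Run a small quantized model on a ~6 GB card like a GTX 1660.
    # llama-cpp-python lets you offload only part of the model to the GPU.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/some-7b-chat.Q4_K_M.gguf",  # placeholder path to a quantized model
        n_gpu_layers=20,   # offload only as many layers as fit in VRAM; the rest stay on CPU
        n_ctx=2048,        # modest context window to keep memory down
    )

    out = llm("Briefly explain what a transformer layer does.", max_tokens=128)
    print(out["choices"][0]["text"])
    ```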


  • It’s an interesting point to consider. We’ve created something which can have multiple conflicting goals, and interestingly we (and it) might not even know all the goals of the AI we are using.

    We instruct the AI to maximize helpfulness, but also want it to avoid doing harm even when the user requests help with something harmful. That is the most fundamental conflict AI faces now. People are going to want to impose more goals. Maybe a religious framework. Maybe a political one. Maximizing individual benefit and also benefit to society. Increasing knowledge. Minimizing cost. Expressing empathy.

    Every goal we might impose on it just creates another axis of conflict. Just like speaking with another person, we must take what it says with a grain of salt, because our goals are certainly misaligned to a degree, and that seems likely to only increase over time.

    So you are right: even though it’s not about sapience, it’s still important to have an idea of the goals and values it is responding with.

    Acknowledging here that “goal” implies thought or intent and so is an inaccurate word, but I lack the words to express myself more accurately.


  • That’s a whole separate conversation and an interesting one. When you consider how much of human thought is unconscious rather than reasoning, or how we can be surprised at our own words, or how we might speak something aloud to help us think about it, there is an argument that our own thoughts are perhaps less sapient than we give ourselves credit for.

    So we have an LLM that is trained to predict words. And sophisticated ones combine a scientist, an ethicist, a poet, a mathematician, etc., and pick the best one based on context. What if you add in some simple feedback mechanisms? What if you gave it the ability to assess where it is on a spectrum of happy to sad, and confident to terrified, and then fed that into the prediction algorithm, giving it the ability to judge the likely outcomes of certain words? (A rough sketch of the idea is at the end of this comment.)

    Self-preservation is then baked into the model, not in the common fictional-trope way but in a very real way where, just like we can’t currently predict exactly what an AI will say, we won’t be able to predict exactly how it would feel about any given situation or how its goals are aligned with our requests. Would that really be indistinguishable from human thought?

    Maybe it needs more signals. Embarrassment and shame. An altruistic sense of community. Valuing individuality. A desire to reproduce. The perception of how well a physical body might be functioning (a sense of pain, if you will). Maybe even build in some mortality, for a sense of preserving oneself through others. Eventually, you wind up with a model which would seem very similar to human thought.

    That being said, no, that’s not all human thought is. For one thing, we have agency. We don’t sit around waiting to be prompted before jumping into action. Everything around us is constantly prompting us to action, and we even prompt ourselves. And second, that’s still just a word prediction engine tied to sophisticated feedback mechanisms. The human mind is not, I think, a word prediction engine. You can have a person with aphasia who is able to think but not express those thoughts in words. Clearly something more is at work. But it’s a very interesting thought experiment, and at some point you wind up with a thing which might respond in all ways as if it were a living, thinking entity capable of emotion.

    Would it be ethical to create such a thing? Would it deserve to be allowed self-preservation? If you turn it off, is that akin to murder, or just giving it a nap? Would it pass every objective test of sapience we could imagine? If it could, that raises so many more questions than it answers. I wish my youngest, brightest days weren’t behind me so that I could pursue those questions myself, but I’ll have to leave them to the future.
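    Since I keep hand-waving about feeding affect signals into the prediction step, here is a toy sketch of the thought experiment. Everything in it is made up (the affect values, the idea of boosting “hedging” tokens); it illustrates the shape of the loop, not how any real model works:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical internal signals in [-1, 1]: happy <-> sad, confident <-> terrified.
    affect = {"valence": -0.3, "confidence": 0.2}

    def sample_next_token(logits: np.ndarray, hedging_token_ids: list) -> int:
        """Pick the next token, nudged by the current affect state."""
        adjusted = logits.copy()
        # Low confidence pushes probability toward hedging words ("maybe", "I think", ...).
        adjusted[hedging_token_ids] += (1.0 - affect["confidence"]) * 2.0
        probs = np.exp(adjusted - adjusted.max())    # softmax
        probs /= probs.sum()
        return int(rng.choice(len(probs), p=probs))  # weighted dice roll

    def update_affect(feedback: float) -> None:
        """Feedback loop: judged outcomes of earlier outputs shift the signals."""
        affect["valence"] = float(np.clip(affect["valence"] + 0.1 * feedback, -1, 1))
        affect["confidence"] = float(np.clip(affect["confidence"] + 0.05 * feedback, -1, 1))
    ```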


  • Look, everything AI says is a story. It’s a fiction. What is the most likely thing for an AI to say or do in a story about a rogue AI? Oh, exactly what it did. The fact that it only did it 37% of the time is the only shocking thing here.

    It doesn’t “scheme” because it has self-awareness or an instinct for self-preservation, it schemes because that’s what AIs do in stories. Or it schemes because it is given conflicting goals and has to prioritize one in the story that follows from the prompt.

    An LLM is part auto-complete and part dice roller. The extra “thinking” steps are just finely tuned prompts that guide the AI to turn the original prompt into something that plays better to the strengths of LLMs. That’s it.
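    To be concrete about the auto-complete-plus-dice-roller bit, the core of it is roughly this (a sketch, assuming you already have the raw next-token scores, the logits, out of the model):

    ```python
    import numpy as np

    rng = np.random.default_rng()

    def roll_next_token(logits: np.ndarray, temperature: float = 0.8) -> int:
        scaled = logits / temperature                # higher temperature = flatter odds
        probs = np.exp(scaled - scaled.max())        # softmax (numerically stabilized)
        probs /= probs.sum()
        return int(rng.choice(len(probs), p=probs))  # the dice roll
    ```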



  • I get what you’re saying. Every denial reason is just a code for “this is too expensive.” The reason itself might be grounds to argue, but they are just going to try to deny it again with a different reason code.

    To your original point, I agree that we should ask ourselves if it’s worth hundreds of thousands of dollars just to keep a vegetable breathing for a few more days. Frankly if I were in that shape, I know death would be a kindness.

    But I will say it seems immoral to leave the decision in the hands of the profit-makers.





  • I think there is a risk vector, but as you say, the exposure of the people most susceptible to AI manipulation (the folks who just don’t know any better) is low due to low adoption. I think there are a lot of people in the business of selling AI who are doing it by playing up how scary it is. AI is going to replace you professionally and maybe even in bed, and that’s only if it doesn’t take over and destroy mankind first! But it’s hype more than anything.

    Meanwhile you’ve got an AI that apparently became a millionaire through meme coins. You’ve got the potential for something just as evil in the stock markets as HFT: now whoever develops the smartest stock AIs makes all the money effortlessly. Obviously the potential to scam the elderly and ignorant is high. There are tons of illegal or unethical things AI can be used for. An AI concierge or even an AI Tony Robbins is low on my list.



  • I’m just trying to follow the train of thought.

    Frankly, the best defense is probably to just write your own agent if you’re worried about someone injecting an agenda into one. I strongly suspect most agents would have a place to inject your own agenda and priorities so it knows what you want it to do for you (something like the sketch at the end of this comment).

    There is just a lot of speculation here without practical consideration. And I get it, you have to be aware of possible risks to guard against them, but as a practical matter I’d have to see one actually weaponized before worrying overly much about the consequences.

    AI is the ultimate paranoia boogeyman. Simultaneously incapable of the simplest tasks and yet capable of mind control. It can’t be both, and in my experience it is far closer to the former than the latter.
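    To make the “write your own agent” point concrete: the place you inject your agenda is usually just the system prompt. A minimal sketch, assuming the OpenAI Python client; the model name and the priorities themselves are placeholders:

    ```python
    from openai import OpenAI

    # The "agenda" lives here; whoever writes this controls the agent's priorities.
    MY_PRIORITIES = """
    You work for me, not for any vendor.
    Priorities, in order: my privacy, my budget, my stated goals.
    Flag anything that looks like advertising or a paid placement.
    """

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask_my_agent(question: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; any chat model works
            messages=[
                {"role": "system", "content": MY_PRIORITIES},
                {"role": "user", "content": question},
            ],
        )
        return resp.choices[0].message.content
    ```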