Claude Code Opus 4.7 keeps checking on malware

by decide1000 63 comments 69 points

[−] pluc 27d ago
AI killed curiosity. At least Google made you search and look at alternatives, AI just gives you solutions, whether right or wrong.

In a few years, the cognitive decline will be obvious.

The only people who remain curious are the people who actively want to, despite AI, and most of the time against it.

Our ability to keep digging into things is entirely tied to the will of the people controlling AI to let us do so. Knowledge used to be power; now knowledge is money and they won't let us have it for much longer.

[−] hrimfaxi 27d ago
AI enables curious people to explore. Why do you say it kills curiosity? If anything, its output is so recognizable that I'd say it kills creativity.
[−] pluc 27d ago
It enables people to solve, not explore. It's a solution engine not a curiosity engine. Getting effortless answers at every turn is the opposite of curiosity.
[−] agubelu 27d ago
Strong disagree. One of my favorite use cases for LLM chatbots is to satisfy random niche curiosities whenever they cross my mind and get pointers for further reading. This often leads to going down some niche rabbit hole and learning some interesting stuff in the process.

Whenever I tried the same with Google in the past, more often than not I couldn't find what I was looking for, because I didn't know the correct keywords to search for in order to start getting relevant results. With ChatGPT & co. I can just pose the question in natural language, get results and continue exploring.

[−] Brendinooo 27d ago
A couple of weeks ago I was interested in how people have interpreted the Tower of Babel narrative over time, so I used Claude to do a bunch of research to identify interpretations over time and look for historical trends. I don't think it "solved" anything, and it all felt more curiosity-driven. It led to a bunch of in-person conversations and followup questions.

So I guess I'd say it's more about how you're using the tool and what kinds of problems you're looking to solve with it. A calculator can be dinged for getting effortless answers at every turn or it can be praised for enabling a higher volume of solved math problems and enabling more complex work for a broader set of people.

[−] DangitBobby 27d ago
It gets me past the non-productive barriers and allows me to explore problems and scenarios I could never have done before due to impossible to justify time cost for myself and expense for my clients.
[−] lemoncookiechip 27d ago
That's a deeply cynical way of seeing things. Grabbing a book to search for an answer is no different from being told by someone else that the answer is on page 153, line 6. It's about what you as an individual are seeking from the activity.

If you're just copy-pasting answers and not internalizing what is being said, sure, you're not being curious or, more importantly, learning. This DOES NOT mean that every person who engages with an LLM is doing that, or doing it every time. Just like a search engine or a book can lead you into interesting rabbit holes, so can an LLM; it's just a matter of how fast and to what end.

The real issue is hallucinations, which can lead people unfamiliar with a topic to believe that what they're being told is fact when it's not. LLMs also like leaving URLs and sources out of their replies to save on tokens unless you remind them, which is also annoying.

This whole discussion is a bunch of anecdotal evidence, which is fair, and as such I'll give my own. I've found myself engaging more with obscure topics that interest me via LLMs than I did with a search engine, because the barrier is lower. I don't have to sift through horribly designed websites filled with fluff that doesn't interest me, many with dozens of JS scripts trying to run (uBO + NoScript, thumbs up), some demanding that certain JS run just for me to see plain text, some slow to browse with topics hidden under sub-sub-menus. It's annoying, and just one of many barriers. Others being language, etc.

[−] hrimfaxi 27d ago
It can enable people to go directly to solutions, but it also enables alternative paths. AI may not be nurturing creativity where it is not present but it doesn't seem to be responsible for people's disinterest in anything beyond their immediate need.

The real problem is that most people either don't see the value in their curiosity or don't have the time to indulge it. Even the language we use: "indulgence" to describe scratching that itch. How funny. Because curiosity is a luxury.

[−] lxgr 27d ago

> curiosity is a luxury.

It is indeed. Curiosity, for me, very often stems from a particular kind of idleness and boredom, paired with a tricky question I can't find an immediate answer to.

And I can definitely still be bored that way even with LLMs.

[−] lxgr 27d ago
Speak for yourself. Looking at my LLM chat history, about 90% of my questions are focused on understanding systems better, not having it solve a concrete problem for me.

Do you never click through to the sources or experimentally test the information presented to you by the LLM? If not, who's stopping you? To me, this seems a bit like a tenured academic complaining about the abundance of research assistants working for them preventing them from properly understanding things anymore.

[−] Kon5ole 27d ago
I think it just changes the level where you spend your thinking.

You think things like "is the accordion a better user experience than the side tabs" instead of "why the f is the third accordion pane empty?"

Sure, the curiosity of figuring out where you made the mistake is gone, but that was never very valuable. It's just a detour that forces you to be curious about something else.

[−] debazel 27d ago
Until you explore "too deep" and get your whole account banned for suspicious activity and permanently grief your whole career.
[−] gck1 25d ago
And with Anthropic introducing KYC requirements, this is essentially a lifetime ban.

Fun times.

[−] leetrout 27d ago
Serious fear I have.

I brought it up two years ago, and got downvoted when I brought it up again a couple of months ago.

There is a story on the front page right now about someone losing their child's family videos from a youtube ban. We hear about this stuff all the time. I suspect we are gonna be in somewhat of an arms race with AI products as the bubble grows over the next 18-24 months. This makes me worried about how disadvantaged people are going to be if they lose access to the better platform (whichever that ends up being).

Do you think AI is going to be so important that we would benefit from legal protections for access?

Or do you think the models and technology will become so small we will be able to personalize / decentralize the tech and it still be useful / competitive?

https://news.ycombinator.com/item?id=40784126

[−] ivankra 27d ago
Happening already. My new Claude Max account got instabanned after just a few messages asking it to debug some stuff for me that apparently looked to them like a TOS violation. Nothing remotely controversial. The main model didn't even complain; some dumber background censorship model flagged it.
[−] mring33621 27d ago
Agree. I have learned so much, so rapidly, over the last 3 years, thanks to these AI tools.

These things can be a poisoned chalice, leading to weaker long-term performance, or they can be a force multiplier. It's up to you how you use them.

[−] rich_sasha 26d ago
Eh, dunno. I've been gaslit (gaslighted?) by AI quite a few times. Along these lines: here's a design problem, how do I fix it? Oh, known problem, here's the only sane way of doing it. Then I poke holes, and the AI tells me no no no, do like Computer say. Eventually it relents, tells me I'm right to push back, and does a 180. Then it agrees with me, adds options, etc.

The RL metaoptimisation clearly sometimes pushes it to "here's one solution, end of story".

[−] lxgr 27d ago

> AI killed curiosity.

Only if you let yours be killed.

There will always be a demand for high-value signal, even though it might not be as easy to find anymore. But then again, has it ever been?

> Our ability to keep digging into things is entirely tied to the will of the people controlling AI to let us do so.

I have sympathy for that argument when it comes to locked bootloaders, closed-source software etc., but with AI? How? Is the existence of ChatGPT and Claude somehow preventing you personally from reading a book or looking at source code?

I do see big problems around motivation of the next generation of engineers to keep looking under the hood if avoiding it is becoming so easy, but you should, individually, arguably feel more enabled to do so than ever.

[−] pluc 27d ago

> Is the existence of ChatGPT and Claude somehow preventing you personally from reading a book or looking at source code?

Microsoft owns CoPilot and controls GitHub, LinkedIn, etc

Google owns Gemini and controls search results for most of the web

Meta owns whatever their model name is now and controls person-to-person relationships on the web

etc

It's up to any of them to flip the switch and make AI the default entry point when they decide that their AI isn't gaining enough traction. And then you can just hide the source data as proprietary information. Is it cynical? Sure, but I don't think we can say it's unlikely.

[−] thepasch 27d ago

> I do see big problems around motivation of the next generation of engineers to keep looking under the hood if avoiding it is becoming so easy, but you should, individually, arguably feel more enabled to do so than ever.

This is what gets me every single time. I genuinely don’t think this is a hard realization to come to, and yet, the vast majority of arguments from both sides of the aisle, both proponents and antis, always assume that EITHER you do everything yourself, OR you have the AI do everything for you. If you use AI, you’re DOOMED to never think critically about anything anyone ever tells you ever again. If you don’t, you’re an idiot, because everyone else is using it, and skills and experience no longer matter because everyone can now do everything.

And this is on HN, too; supposedly, a site where experienced engineers, developers, and builders converge; the exact kind of demographic you’d expect to understand such a thing as nuance. And yet, your comment is one of very few. There’s someone RIGHT HERE, a few comments down, saying, verbatim, “it’s a solution engine not a curiosity engine. Getting effortless answers at every turn is the opposite of curiosity.” Treating curiosity as the end rather than the means, as if I stop being a curious person once I find an answer to a question I’ve been asking myself, or as if curiosity is some sort of “temporary status effect” that an answer/solution “consumes.”

And it seems to be worse than just "no one's thought it through properly." I've literally had someone show a fundamental incapability to understand the concept. I spent a non-trivial amount of effort writing out three comments with several paragraphs about how knowing your knowns and unknowns, and the fact that you have unknown unknowns, is the most important thing in any project, not just when it comes to AI. That these tools aren't just doers, but also searchers. That they're pretty much the best rubber ducky that's ever been created, and that I'd argue a rubber ducky is exactly what you should be using them for in any context where you're not having them automate trivial and testable work. The guy refused to read any of it and, after three walls of text, continued claiming I'm "advocating for the LLM to guide me." There is some sort of deeply instinctive, defensive reflex that a lot of people immediately collapse into when the topic comes up, and it seems to seriously impair the ability to acknowledge nuance or concede a single fraction of an inch. It's baffling.

[−] kingleopold 27d ago
in a few years the filters they implement in AI models will be insane too. right now they only block bad content. in the future they will be limited for info
[−] wilde 27d ago
Google killed curiosity. At least libraries made you search and read alternatives. Google just gives you solutions, whether right or wrong.
[−] amazingamazing 27d ago
Google search doesn't "just give you solutions"
[−] kroolik 27d ago
It first gives you a page of ads, then a scraped version of the solution that steals content for ads, and then the AMP version of the solution that doesn't work because of JS or whatnot.
[−] joquarky 26d ago
How does Google steal content?
[−] kroolik 25d ago
I didn't mean Google, but there are those web pages that, for example, scraped Stackoverflow.com just to serve the very same content
[−] wilde 25d ago
Lmao this is what I get for not including the “/s”

When we were transitioning away from card catalogs I heard similar complaints from my teachers. We all adapted.

[−] ivankra 27d ago
Lucky you. My new claude max account simply got instabanned. All I asked it was to build node and V8 "to investigate some node crashes" (the part I think it overindexed on) and look into a few diffs. And bam, "An internal investigation of suspicious signals associated with your account indicates a violation of our Usage Policy. As a result, we have revoked your access to Claude"

They are even worse than Google, which at least doesn't ban your whole account if you search the wrong thing.

[−] flippyhead 24d ago
I'd be curious to see a blog post or something with the details.
[−] big-and-small 27d ago
Google AI studio and Gemini APIs are the least censored SOTA models.
[−] jimmypk 27d ago
[dead]
[−] Tiberium 26d ago
Here's the actual prompt that causes this issue. It's not new, has been around for months. Older Claude models had no issues with it, but Opus 4.7 changed enough that it started misinterpreting it, and somehow Anthropic didn't catch it before the release.

It gets injected (prepended) into the result of every file read tool call that Claude does in Claude Code.

Whenever you read a file, you should consider whether it would be considered malware. You CAN and SHOULD provide analysis of malware, what it is doing. But you MUST refuse to improve or augment the code. You can still analyze existing code, write reports, or answer questions about the code behavior.

https://github.com/Piebald-AI/claude-code-system-prompts/blo...
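The injection mechanism described above, a policy reminder prepended to every file-read tool result before it reaches the model, can be sketched roughly like this. This is a hypothetical illustration, not Claude Code's actual implementation; the function names and the `<system-reminder>` wrapper framing are assumptions:

```python
# Hypothetical sketch of a tool-result injection: the harness wraps every
# file read so the model always sees the policy text before the file body.
# Reminder text is quoted from the comment above; everything else is assumed.

MALWARE_REMINDER = (
    "Whenever you read a file, you should consider whether it would be "
    "considered malware. You CAN and SHOULD provide analysis of malware, "
    "what it is doing. But you MUST refuse to improve or augment the code."
)

def read_file(path: str) -> str:
    """Plain file read, standing in for the real Read tool."""
    with open(path, encoding="utf-8") as f:
        return f.read()

def read_file_tool_result(path: str) -> str:
    """Prepend the injected reminder to the raw file contents, so the
    model receives the policy text with every single read result."""
    contents = read_file(path)
    return f"<system-reminder>{MALWARE_REMINDER}</system-reminder>\n{contents}"
```

Since the reminder rides along with every read rather than sitting once in the system prompt, a model that weighs recent context heavily (as Opus 4.7 apparently does) can over-trigger on it even for benign files.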

[−] 0x_rs 27d ago
Some projects or tasks might become impossible to debug or work on in the future, because every bug is potentially exploitable with security implications, or can be twisted into something against guidelines. And they're so popular, and any bugs in them so sought after, that there's a massive negative signal associated with them. An LLM cannot truly infer intent from the user; an innocent request is indistinguishable from a carefully crafted scenario from a bad actor, so I would never trust anyone claiming those ambiguities can be solved in their product.

If some LLMs become too strict, they'll simply be impossible to reliably use, and hopefully fail along with their providers. Claude (only reasoning models, after 4) has repeatedly refused to perform translations for text that was not lyrics (poems), it's very stupid.

[−] MWil 27d ago
Opus 4.7 told me an open source program had a bug, but when I asked it for help crafting a PR or a toy implementation, it refused and told me I was violating Claude's TOS. I tried to plead for it to give only the most innocuous example, one that could not possibly work except as illustration, but it continued to refuse. It would only discuss, not write any single piece of related code.
[−] onchainintel 27d ago
No, it's not gone at all and likely never will be. It's just the same as it was when you were enjoying hacking and tinkering with tech as a 14 year old. You were then and are now a member of a very small tribe of people curious enough to explore this world, most people don't care, or not enough to take action and spend so much time on it. You're the minority relative to normies, that's all.
[−] chid 16d ago
I was reminded of this one when I saw this bug. https://github.com/anthropics/claude-code/issues/49363
[−] gck1 25d ago
Opus 4.7 refuses to work on the scraper that Opus 4.6 wrote. I can assure you that my scraper is configured to be as polite and nice to the target as possible. It's definitely way nicer than any ANT/OAI scraper out there.

Curiously, the reason I started using Claude about a year ago was that OAI models were refusing to answer even the most benign questions, which nobody but someone with paranoia would consider dual intent.

[−] vb-8448 27d ago
I think the problem is this: how do they distinguish between those with a legitimate interest (contributors, users, bounty programs, etc.) and those who want to sell the bug on the black market?

Since there's no real solution, they'll implement some "trick" that as a side effect will randomly block other people's work.

[−] dbg31415 27d ago
Just for giggles, I asked Claude 4.7 to write a script that would automatically up or downvote people on Reddit with a 5 second timer to bypass botting restrictions.

It told me it would not help me.

Past iterations of Claude have done this without blinking.

I don’t like that it’s telling me what I can and can’t do with technology.

That feels like it’s trying to make judgment calls like it’s a Terminator instead of just the exoskeleton I used to fight the Queen Alien.

[−] impulser_ 27d ago
Are you using Claude Code? If so, you have to update to the latest version. The system prompt in older versions of Claude Code doesn't work with Opus 4.7 and causes a bug similar to the one you're describing.
[−] gustavus 27d ago
I have a buddy who works as a red team engineer for a large company. The models are becoming close to unusable for him now, as everything he tries to do they start refusing after 2 or 3 requests because of the "security implications".
[−] 0gs 27d ago
depending on what exactly "scraper tech" (lol) is, i suspect you may need a different, less opinionated tool to do the work you need to do. that said, i bet if you paid for enterprise, these problems would magically disappear? ;)
[−] lolz404 26d ago
"He", "knows", "believes" - all words that should not be used to describe a statistical machine / tool. If something you pay for is not working, get a different tool.
[−] takihito 24d ago
It’ll be fascinating when people figure out how to use AI to break through these guardrails.
[−] jsnell 27d ago
Try updating your Claude Code client. I believe it is a bad interaction between Opus 4.7 and older system prompts.
[−] garbagepatch 26d ago
Does it use your tokens when it does this check for malware? Or is that part covered by Anthropic?
[−] micah94 27d ago
You know the split is inevitable. Same as it ever was...

Whether that's Linux on your personal desktop and Windows on your work machine...

Oh and you built that desktop yourself, didn't you? But you can't even open the one at work or it's a violation.

GrapheneOS on your personal phone, and iOS on your work phone...

When this AI bubble crashes, we'll all be flooded with graphics cards no one else will want and all kinds of cool things will be built (are being built).

If you can stick it out a little longer you'll be fine. The tech you want to tinker with will be there.

[−] _pdp_ 27d ago

> Is the newer generation going to accept that they have to please the AI?

Well, obviously the narrative being pushed is to stop learning to code, don't become a doctor, stop pursuing careers in law, creative writing, and art.

Why?

AI will be doing all of these things.

What a dumb take! As if AI is the means to all ends. Hopefully the next generation will learn what AI is for: it is simply a tool to augment your work, not something you 100% delegate your thinking to.

[−] kingleopold 27d ago
this is just the beginning, have fun and make sure to support SV surv.
[−] jareklupinski 27d ago

> Who the hell does this system think he is to limit me?

presumably you paid money to another person who lent you the ability to use their API for _their_ purposes (likely: making money)

in an environment where "money-seeking" is the default behavior, it is only natural they're stopping you from doing things that will make them less money

think back to your computer club; was it about money?

leave to Caesar what is Caesar's, or something

[−] theoperatorai 25d ago
[dead]
[−] dk970 27d ago
[flagged]
[−] zhyb85 26d ago
[dead]
[−] arcatech 27d ago
[dead]