I've been waiting over a month for Anthropic to respond to my billing issue (nickvecchioni.github.io)

by nickvec 200 comments 433 points

[−] thisisit 36d ago
My most recent experience with Claude support was rough.

I used a Visa card to buy a monthly Pro subscription. One day I ran out of credits, so I went to buy extra credits. But my card got declined. I rechecked my card limit and tried again. Still declined.

I tried extending the Pro subscription instead. It worked. Turns out my card had a "Secure by Visa" feature: to complete a transaction I needed to submit an OTP on a Visa page. This page appeared when paying for Pro, but not while buying extra credits.

I opened a ticket with Claude support and mentioned all these details. Even so, they came back with: "We have no way of knowing why your card was declined. You need to check with your bank".

Later I got hold of a Mastercard with similar protection. The OTP triggered on both the subscription and the extra-usage pages.

I shared the finding, and the response was still: "We checked with our engineering team and we have no way of knowing why the other Visa card was declined. You have to check with your bank".

I gave up trying to buy extra usage.

My experience with Replicate has been similarly absurd. For testing I loaded $10 into my balance, but I kept getting rate limited with an error saying my balance should be above $5. The responses have been absurd: the AI bot said my balance had to be above $10. When I asked why the error message said $5, the "human" support responded that it might be a "temporary hiccup". Later they came back saying my balance had to be above $20 for full rate limits. I asked again: why isn't their rate-limit error message clear? No response for the past 10 days.

It's like all these AI companies want to replace developers, but their own systems are held together with super glue.
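The Visa symptom described above is consistent with a checkout bug where one code path requests 3-D Secure (the OTP page) and another doesn't. With Stripe, for instance, 3DS behavior is controlled per PaymentIntent via `payment_method_options.card.request_three_d_secure`. A minimal sketch of how two flows can diverge (hypothetical flow names, no real API calls; this is a guess from the symptoms, not Anthropic's actual code):

```python
# Sketch: two checkout paths that diverge on 3-D Secure (3DS).
# The real Stripe parameter is payment_method_options.card.request_three_d_secure
# on a PaymentIntent; flow names here are hypothetical.

def payment_intent_params(flow: str, amount_cents: int) -> dict:
    """Build simplified PaymentIntent parameters for a checkout flow."""
    params = {
        "amount": amount_cents,
        "currency": "usd",
        "payment_method_options": {"card": {}},
    }
    if flow == "subscription":
        # This path asks the issuer for a 3DS challenge when needed,
        # so cards that *require* an OTP can still succeed.
        params["payment_method_options"]["card"]["request_three_d_secure"] = "automatic"
    elif flow == "extra_credits":
        # Bug: this path never requests 3DS. Issuers that mandate an OTP
        # simply decline, and the merchant sees a generic decline code.
        pass
    return params

sub = payment_intent_params("subscription", 2000)
credits = payment_intent_params("extra_credits", 500)
print(sub["payment_method_options"]["card"])      # {'request_three_d_secure': 'automatic'}
print(credits["payment_method_options"]["card"])  # {} -> OTP page never shown
```

That would also explain why support "can't see" the cause: from their side, the bank just declined the charge.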

[−] frb 36d ago
I have almost the exact same issue. My account used to be on one card, which I cancelled. I updated my Claude subscription to another card, and it worked without any issues, including MFA/OTP.

For some reason I really don't understand, it's a different payment process on the Developer Platform page. When I tried to update my card there with 3 different credit cards (MC, Visa, AmEx) and with Stripe Link, they all got the same rejection. It's clearly some bug/issue on their side.

Their chatbot is honestly an embarrassment for a frontier AI company like Anthropic: generic, unhelpful, and lecturing me on what MFA/OTP is, implying "it's you being too stupid to use a credit card".

I've also been waiting 2 weeks for a human to respond to my 2 messages.

This whole thread shows I am (and you are) not alone with this issue, so it's hard to understand how the "engineering team checked" and didn't find anything.

I've been building payment systems for over a decade, and just projecting from this thread, their support queue should be blowing up over an issue like this. I know in my companies it would have.

[−] hurflmurfl 36d ago
I'll just note that I'm using Revolut, and some of my virtual cards there appear to be randomly created as Visa or Mastercard. I couldn't pay for Claude with my Visa (no matter whether virtual or physical), but I found a comment on Reddit suggesting Mastercard, and that worked without a hitch.

So they certainly have a problem with their Visa flow. I wonder if the payment flow was vibe-coded from scratch; I've never experienced that with any other site.

[−] reaperducer 36d ago
> I open a ticket and mention all the details to Claude support. Even these details they come back with "We have no way of knowing why your card was declined. You need to check with your bank".

Well, at least they're dogfooding support.

[−] nottorp 36d ago
Two days ago I logged in for the first time to a corporate Claude account.

It took like 40 minutes and worked like this:

1. Sign in to the site, get the onboarding screen, download clients.

2. Run a client; it triggers an email with a link to open on the site so I can authenticate it.

3. Instead of authenticating the client, the site sends me to the onboarding screen again.

4-20. Repeat the above loop.

21. Finally get a code I can paste into the client.

I'm sure someone at Anthropic posted a blog entry about how fast they vibe-coded the authentication flow, back before they claimed to have vibe-coded a compiler that can build the Linux kernel.

[−] ai_slop_hater 36d ago
It's so funny how all these billion dollar AI companies seem to be unable to hire a single person who can code.
[−] eranation 36d ago
I'm an enterprise customer, and I've still been waiting over 2 weeks for a human to respond, even though there's a special form for expedited enterprise support. I understand growing pains, but this borders on incompetence. If Comcast gives better customer service, you have a problem. (By the way, unless you really have to, I would recommend staying on the Teams plan or a personal plan if you can.)

Not to mention their one nine of availability (I am not joking; check for yourself).

Insert victims of their own success cliche here.

[−] castral 37d ago
I've also been waiting over three weeks to speak with customer support after being gifted an annual subscription just as my payment card expired. The failed payment (after the $200 gift) downgraded my account to the free tier and I lost my annual subscription. I had to pay another $20 to get back into the pro tier plan, but now for some reason I only have $197 in credits and I'm on the monthly subscription instead of the annual. Anthropic basically just made 3+ months of credits disappear for their own billing mistake.

The kicker? When you get downgraded to the Free tier, they don't offer any support beyond the AI bot. You have to go through some hoops to get it to open a support ticket to maybe talk to a human in 4-5 weeks. Unbelievable.

[−] beacon294 37d ago
Oh no, that's the same on the $200 tier, don't worry. You never talk to a human.
[−] simgt 37d ago
I had the displeasure of interacting with that support agent earlier today and was very surprised. It's just as good as the one my ISP has.

We're meant to trust Anthropic enough to replace all of our engineers with their model for writing our software, but somehow they don't trust it enough to let it handle simple customer support decisions. But shhhh, it's voluntarily nerfed just slightly below ASI, for our safety.

[−] ValentineC 37d ago

> We're meant to trust Anthropic enough to replace all of our engineers with their model for writing our software but somehow they don't trust it enough to let it handle simple customer support decisions.

Anthropic seems to have adopted the toxic Google mentality of "good enough product, barely any customer support" despite being one of the entities that can crack this.

[−] stingraycharles 37d ago
Yeah, this would make a lot of sense to crack, given that customer support must be a huge potential revenue stream for them. Starting by fixing their own support would make sense, given that it's relatively limited in scope.
[−] sassymuffinz 37d ago
Absolutely: the world-changing near-AGI, capable of PhD-level reasoning and imagination, just cannot possibly be trusted to decide on a refund. They'll let it choose a target for a Tomahawk missile, but the real problem would be giving it the decision to refund a few bucks. The broligarchy cares less about collateral damage in war than about refunding someone's $20/mo sub.
[−] DaedalusII 37d ago
hmm, if i give customer refund i can make less paperclips

if i target tomahawk missiles the government will give me money and i can make more paperclips

effective paperclipism strikes again

[−] PunchyHamster 37d ago
You're not meant to trust them. Stop getting hooked on company PR.
[−] b112 37d ago
They didn't say they trusted those claims.
[−] cyanydeez 37d ago
Has anyone tried to turn one of these support agents into a coding harness?
[−] dmoy 37d ago
Not like super seriously, but in limited joke capacities it does work

https://www.reddit.com/r/ClaudeCode/comments/1rsbxn9/stop_sp...

[−] RobRivera 37d ago
Who keeps claiming these models are meant to replace engineers?
[−] munk-a 37d ago
OpenAI, xAI, Anthropic, Google, MSFT, Spotify, Duolingo and NVidia - those are the ones that come immediately to mind. They're either selling the AI (or the tools to make the AI) or hoping against all hope that they're on the right side of bubble history.

If we soften the claim to "increase engineer productivity" I think something like 70% of engineers would also agree. If you tack on "if applied wisely" then you'll probably be up to 95% of engineers

[−] nurettin 37d ago
[flagged]
[−] suprjami 36d ago
"Anthropic CEO Says AI Could Replace Software Engineers in 6 to 12 Months"

https://www.entrepreneur.com/business-news/ai-ceo-says-softw...

[−] wnevets 37d ago
the remaining population of linkedin users?
[−] Setas 35d ago
[dead]
[−] crimsonnoodle58 37d ago
Same experience. We had a billing bug which put our organization into a loop: couldn't cancel the subscription, couldn't add one, couldn't delete the users of the organization because of the lack of a subscription, and so on. In the end it was easier to rename the organization to 'do not use' and create another one than to wait a month for their non-existent support.
[−] TheGRS 37d ago
In all seriousness, shouldn't Anthropic be heavily dogfooding this sort of use case? I'm also not a huge fan of Amazon's support system, but they at least seem to be using their AI tools a lot for support responses (which has its own issues, but credit where it's due).

Every conference talk on this stuff seems to suggest that we're all way behind the curve on AI implementation, but I suspect it's mostly smoke and mirrors and mechanical turks. My company invests heavily in automated IVR and chat responses, and we still optimize for getting the customer to a real agent. Those agents are largely overseas BPOs, but at least that's better than an AI loop that gets you nowhere.

[−] dangus 36d ago
The truth is that it has nothing to do with AI. Many tech companies learned from Google that the most cost-optimized thing to do is provide zero recourse to customers.

Companies that operate this way figure their customers are either so entrenched they’ll never leave or that it’s cheaper to get a new customer than expend a human’s time fixing a customer’s issue.

I hope OP either files a chargeback with their card or files a small claims court suit. Either way, they should take their money elsewhere.

But they probably won’t, because Claude is the best coding agent. Just like Google is the best search engine/free email/etc.

[−] TheGRS 36d ago
You might be right, but that's an awfully risky bet for such a new company in a tight race with their competitors. I'm sure a modestly staffed support team could get to these requests, especially if they put some AI resources into helping.
[−] bashtoni 36d ago
Me too. I got a random $45.08 invoice from Anthropic on Mar 21 despite the fact that I'm on the Max x5 plan, and have auto top up disabled.

I raised it with support immediately. After being told a human would look at it, I've not been able to get anything further.

[−] bredren 37d ago
I had a similar thing happen: I was looking to recover funds from unexpected extra usage charges and went through an identical experience.

I realize the company barely has time to cash checks, but small, reasonable charge disputes should still be handled appropriately.

[−] CharlieDigital 37d ago

    > Anthropic is an AI company that builds one of the most capable AI assistants in the world. Their support system is a Fin AI chatbot that can’t actually help you.
This really cuts to the reality of AI hype: no, agents are not nearly as capable as OpenAI, Anthropic, etc. need you (or rather your C-suite, itching to fire you) to believe. They really, really need you to believe the hype. How can you tell? Cases like this and the fact that there are 5000 open bugs, constant regressions, ignored feature requests in the CC repo. The fact that Codex doesn't fully implement the simple and well-defined MCP spec for prompts. The fact that even CC has gaps with the MCP implementation...a spec that they created!

If the progenitors with functionally infinite tokens can't get this basic stuff right, everything else they are doing is just blowing smoke. I don't care if you can ship a kernel compiler or a janky "browser"; how about just make your software work? The smartest guys in this space, engineers making 7 figures in TC, with billions in capital, unlimited tokens, and access to the best models cannot make a simple customer support chatbot work.

But you! You're expected to deliver that customer support agent that's going to allow them to cut 500 people from payroll. You'll have it by Monday, right?

It's some Tai Lopez "Here in my garage" energy.

Let that sink in.

[−] ttoinou 37d ago
What if they built their company with poor support on purpose, so they don't have to hold themselves to any standard? Other companies historically have good reputations for customer support, though, and maybe AI can help them easily automate the easiest 80% of requests.
[−] CharlieDigital 37d ago
Hear me out: what if a lot of the hype they are selling you is performative marketing that they absolutely need your C-suite to believe so they can cut more headcount? Then spend a bunch of time generating piles of code that is human unmaintainable because now you're using AI code reviewers, AI testers, AI QA. Then thrash around using more tokens when it invariably causes production issues and no one can read the code anymore except for their latest and greatest models with 1m context window.
[−] c3fxx 37d ago
Congrats. That's the strategy of OpenAI and Anthropic.
[−] Theodores 37d ago
Clearly sales and other teams are the important people within the company, with customer service down the pecking order.

They don't need AI to automate their customer service requests; they just need decent forms with a standard-issue helpdesk system. It takes some work to get right, but anyone with experience building customer support services would be able to do that, and put most of the customer service team out of work!!!

The problem is that the Law of The Instrument applies:

It is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.

So we have some AI 'hammer' going on here, and it is the wrong tool.

At a guess, 80% of customer service requests are going to be billing related, with some need to provide refunds or free credits. Get the form right so it shows the right boxes, and these 'easy wins' can show up as a big list that a customer service person glances over before hitting the 'refund everyone' button. You need the human there to take responsibility, plus they can work on the other 20% of tickets once they have spent ten minutes clearing down the refund/extra-credit requests.

Google don't sell much to end customers, therefore no support. If I search Google for how to remove non-Latin fonts from my computer, and their AI bot gives me an answer that zaps my whole machine, I can't complain and ask for a refund because I never paid anything in the first place. Google do not need to speak to a single customer.

Meanwhile, Arsthropic have a commercial product with billing. They prefer not to do customer service, but they are being stupid: every contact with customers, and friendly customer service, is an opportunity to sell more to them, or at least to not have them hate you. This is why companies should do customer service; however, they also need to put CS at the heart of the org chart and acknowledge that a well-run CS department raises revenue and is not a cost.

[−] petre 36d ago
AI customer support is basically this: waste customer time by burning tokens instead of outsourcing to India.
[−] consp 37d ago
Those are already automated by making your first question "Did you plug it in?", followed by "Did you actually plug it in?". Or industry equivalent. It's not like there wasn't any research into this in the past century.
[−] ceejayoz 37d ago
It's really a bit fascinating. I've had Claude one-shot complex functionality... and I've had it be unable to debug its own .mcp.json file effectively.
[−] ymolodtsov 37d ago
Agents are very capable; their implementation matters. I doubt many support agents have access to edit user records, so even if they can accept responsibility, they won't be able to make any radical changes to your account to fix these issues. It's not an AI problem per se; it's a product problem.
[−] CharlieDigital 37d ago

    > I doubt many support agents have access to editing user records
Why do you think that's the case?
[−] ymolodtsov 33d ago
The same reason why L1 support is generally useless (they also can't do pretty much anything other than escalate).

AI simply replaces L1 support: it's better at answering basic knowledge questions and doing RAG over the Q&A (plus cheaper and faster).

[−] xvector 36d ago
Just because agents aren't immune to prompt injection doesn't mean they aren't fantastically capable.
[−] datadrivenangel 36d ago
My grandma gave me $10,000 in credit for Christmas and they never showed up. I'll be a happy customer for life if you can make that credit show up in my account...

It only has a ~1 in 20,000 chance of working but at scale it'll go through!

[−] freejazz 37d ago
So it's just a coincidence that they can't edit user records? They can't get another agent to fix that, even?
[−] breve 37d ago

> AI-only support that serves as a wall between customers and anyone who can actually resolve their issue

My god. Anthropic has done it. Those crazy bastards have gone ahead and done it!

They've achieved AGI for customer service. It's just like the real thing!

[−] svetkis 36d ago
[dead]
[−] Hobadee 37d ago
TBF, I think Anthropic is a victim of their own success right now. We've had clients reach out to their sales team and be unable to reach anyone. I think they are just busier than they can actually handle.
[−] etothet 37d ago
I had a very mediocre experience with their sales team when I was trying to understand how my company could sign up for their enterprise plan. I could barely get the time of day from them, and once I finally got a response, the rep knew very little and never responded to my follow-up questions. At the time, enterprise plans started at a $250,000 minimum spend/year, which we would've been well over.
[−] dgellow 37d ago
Yes, it’s pretty much the case, they are trying to scale as fast as they can from what I understand. Their growth over the last year has been just insane
[−] dude250711 37d ago
A bit ironic for an AI company. But sure, your business should put its trust in their tech.
[−] sysrqc 36d ago
After my account was inexplicably disabled last month, I submitted numerous support requests; no one has been able to explain why my account was banned or what I should do next.

I even tried to open a new account, but it was immediately disabled as well. No refund, no explanation. Disappointed with Anthropic. I've started using Codex now, and it works great without any of these weird issues. Being in a vendor lock-in state was absolutely terrible; I'm happy that I finally have the freedom to choose, though it is painful to rebuild my whole workflow. I would recommend that anyone frustrated by this incompetent company switch to a competitor's products immediately.

[−] SoftTalker 37d ago

> I also wanted to confirm with a human on exactly what went wrong

They wouldn't be able to tell you. The entire back-end system is probably vibe-coded and nobody really understands what it does.

[−] hs86 37d ago
I tried their Pro plan on March 1 and immediately noticed how bad their usage limits were, so I asked for a refund that same evening.

Their chatbot accepted the request, I was downgraded to the free plan immediately, and since then I have been waiting for the money.

[−] subscribed 37d ago
Did you follow up? You might need to do it again before a chargeback.

Thankfully that's not Google, so your life is not going to be turned upside down because they don't give a f*.

[−] hs86 37d ago
I opened a new ticket over three weeks ago to ask about the status of the refund, and that has been left untouched as well.

Now I have submitted a reclamation request to my bank and am waiting for a response.

[−] BlueRock-Jake 37d ago
I weirdly feel like this is a newer issue. I hadn't had a problem running queries/actions until this past month, when it seems I'm constantly getting hit with rate limits despite not increasing my usage.
[−] Jarwain 37d ago
The default model is Opus with a 1M context, so autocompact doesn't run as frequently, and that just devours your session budget if you're not careful. There are some env variables you can (ask Claude to) set to lower your max context window and autocompact threshold.
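To make the budget point concrete, here is an illustrative back-of-the-envelope calculation (all numbers made up, not Anthropic's actual accounting): each turn resends the conversation history as input, so a window that compacts late accumulates much larger per-turn inputs than one that compacts early.

```python
# Illustrative only: why a bigger context window with a late autocompact
# threshold burns more of a session's token budget. Numbers are made up.

def tokens_consumed(context_limit, compact_at, turn_tokens, turns):
    """Total input tokens over a session, assuming each turn resends the
    full history, and history is replaced by a small summary once it
    crosses compact_at * context_limit."""
    history = 0
    total = 0
    for _ in range(turns):
        history += turn_tokens
        if history > compact_at * context_limit:
            history = 10_000  # compacted summary (assumed size)
        total += history      # the whole history is (re)sent as input
    return total

small = tokens_consumed(context_limit=200_000, compact_at=0.8,
                        turn_tokens=5_000, turns=100)
large = tokens_consumed(context_limit=1_000_000, compact_at=0.8,
                        turn_tokens=5_000, turns=100)
print(f"{large / small:.1f}x")  # the 1M window resends far more history
```

Under these assumed numbers the 1M window never compacts at all over the session, so every turn carries the full history.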
[−] jondwillis 37d ago
Issue a chargeback.
[−] MostlyStable 37d ago
It's important to remember that a chargeback should be considered the nuclear option, and, when using it, one should be comfortable with the possibility that one might never do business with this company again, since it could result in being blacklisted (even if one is, in fact, in the right). I'm not saying not to do it, but one should keep in mind the potential repercussions.
[−] yadaeno 37d ago
If a business attempts to steal from me I instantly charge back and the onus is on them to prove that I owe them money. I do this all the time and have never been blacklisted.
[−] barkingcat 37d ago
Waiting a month for a refund (and having lost access to the Pro plan immediately, with no immediate refund) is definite grounds for a chargeback.

There is no human on the other end of the chain, and I bet that chargebacks are how they issue refunds (i.e. relying on the "nuclear" option as the standard practice for how refunds fundamentally work at their company).

I.e. "we don't need to answer emails about refunds, because if they really wanted their money back, they'd issue a chargeback" as part of the regular procedure.

A lot of companies do this, and it's a common way of minimizing customer support budgets.

[−] nitwit005 37d ago
This is: yes, you were robbed, but what if you want to partner with the bandit later?

They'll just rob you in your future interactions too.

[−] nickvec 37d ago
Yikes. That's unacceptable. Crazy that it has been over a month and you still haven't gotten the refund.
[−] aemonfly 36d ago
I'm on the $20 Pro plan, and I only use Claude through the web chat interface at claude.ai. I do not use Claude Code, the API, or any third-party integrations.

So far this month: "$81.07 spent (Resets May 1)", just 8 days in. For basic web-based conversations, accumulating $81 in overage charges within three days (April 5, April 6, April 7) is unreasonable.

[−] joshribakoff 36d ago
After the January triple billing it took me 20 emails to get a human, who alleged he didn't see the charges. After that, I received only AI responses again. Upon opening credit card disputes, they finally replied to say they can't help now because of the disputes. I won the disputes.
[−] avree 37d ago
Anthropic doesn't allow you to hide or unshare Projects which were shared by team members who are no longer on the team. Contacted them about this two months ago, have yet to hear from any human.
[−] kelp6063 37d ago
This is what credit card chargebacks are for.
[−] jsw97 37d ago
I don’t know why you waited so long to submit this to the support forum they actually read, which is of course this one.
[−] ddtaylor 37d ago
I did a chargeback against OpenAI for something similar and I showed my credit card company the logs with the support bot, as it was my only point of contact for the company.
[−] serf 37d ago
It took me like 6 weeks and 12 chat sessions to get Anthropic to essentially end the conversation with "Yeah, whoops, we'll forward that to the dev team" after they cut my Max sub short by 4 hours.

That's the single reason I am no longer a customer. I don't feel like shoveling money at non-communicating phantoms.

The 4 hours of credit weren't by any means worth the time; what irked me was the casual disregard for lost customer value.

[−] janpeuker 36d ago
Had the same experience after switching from regular credits to a subscription, on realizing that credits not only cannot be refunded, but also cannot be used to buy a subscription, and, even wilder, cannot be used to buy extra usage for a subscription. In other words, the credits are just stolen. It took ~10 support tickets and ~4 weeks to get a reply from a human that actually acknowledged this.
[−] snthpy 36d ago
I haven't had these issues, but I find it strange that I can only sign in with email magic links and no other auth, like WebAuthn for example.
[−] nextzck 37d ago
It's funny because not even Claude knows how to reach someone. It was freaking out over why it couldn't follow my instructions and kept pulling away from them. It exhausted everything and finally said it couldn't do anything about this. Although it did admit that if I said I was suicidal, a message would probably get to a human, but that it couldn't do that as it would be wrong lol
[−] KellyCriterion 37d ago
I didn't know that they have any useful support at all! :-D

I sent them some feedback on some issues, actually good ideas, and I haven't gotten any response so far.

[−] khelavastr 37d ago
Have you tried suing them in small claims court? They skimp on being a real company with real legal support by burning investor capital, because staff attorney salaries get scrutinized much harder than individual lawsuits over practices that aren't resolved by the next pay period.

Most people who commit wire fraud weren't socially bullied and criticized enough before reaching their professional positions to stay in line legally. Useless failures.

[−] solfox 37d ago
Fin is actually Intercom's branded agent, so it isn't clear whether Anthropic is using their own model for support at all.
[−] teling 37d ago
This is the risk of being a consumer in the AI world - companies are running extremely lean on real humans and are deferring support to AI chatbots with no real reasoning abilities...

Also an issue with scale - for example, Google having similar issues of not handling small, isolated cases.

Hope you get your money back!

[−] vanwal_j 37d ago
I'm not surprised. I burn (on purpose) more than $15k/month on Anthropic tokens, and I've never been able to talk to anyone in their sales team despite filling out the contact form every week for the past 4 months :')
[−] ChaitanyaSai 36d ago
It's been more than a year for us in India. We've resorted to using OpenRouter. How is Mythos, or whatever their latest model is, not realizing that this is a priority? Customers WANT to pay you and cannot!
[−] vcoppola 37d ago
I had a similar experience. Pretty ironic that we can't reach a human given that "Anthropic" literally means "involving or concerning the existence of human life".
[−] loose-cannon 36d ago
I think it's slightly comical to see people from the tech community deal with the problems, as customers, that are brought about by the dysfunction of AI. No, I'm not making a sweeping statement on the usefulness of AI. It clearly is capable in some areas. But, surely, companies should have an incentive to have a functioning customer support? And doubly so if you're a company trying to automate labor?
[−] freediddy 37d ago
The FTC should enforce, across all companies, either a support level commensurate with revenues, or the ability for customers to force refunds automatically.
[−] stavros 37d ago
We've been trying to get a Claude Code subscription for my company. The pricing page says $25, but they actually charge £25, which is 34% higher. I've been trying to talk to them for months; their support people don't even read what I'm saying and insist that it's somehow because of proration.

I'm fairly sure their billing backend is vibe-coded and their support is worse than Google's.
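For what it's worth, the "same number, different currency" pattern is a classic billing bug: the amount survives as a bare integer while the currency code gets set per region, so nothing flags the mismatch. A toy illustration (assumed exchange rate of 1.34, purely for the arithmetic; not Anthropic's actual code):

```python
# Hypothetical sketch of the "$25 shown, £25 charged" class of bug:
# the amount is stored as a bare number and the currency is attached
# separately, so a mismatched currency code silently overcharges.

PRICE = 25  # what the pricing page renders as "$25"

def charge(amount, currency):
    """Build a minimal charge record in minor units (cents/pence)."""
    return {"amount_minor_units": amount * 100, "currency": currency}

shown = charge(PRICE, "usd")
billed = charge(PRICE, "gbp")   # bug: same number, wrong currency

GBP_TO_USD = 1.34               # assumed rate for illustration
overcharge = (PRICE * GBP_TO_USD - PRICE) / PRICE
print(f"{overcharge:.0%}")      # 34%
```

At that assumed rate, 25 units billed in GBP instead of USD works out to exactly the 34% overcharge described above.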

[−] wizzard0 37d ago
Their response time is usually around a month IME, yes.
[−] aspectmin 37d ago
Thinking it might be time to push for laws mandating that companies have better systems to handle and address concerns that impact customers' businesses and livelihoods.

This inability to reach customer support channels and/or get things resolved through them seems endemic, and probably part of the enshittification trend as a whole.

[−] espeed 36d ago
SAME (sent to usersafety@anthropic.com, disclosure@anthropic.com on January 8 2026, no response)...

Claude Code Exploit: Claude Code Becomes an Unwitting Executor https://github.com/anthropics/claude-code/issues/45951

[−] cbg0 37d ago
Large corporations have been downsizing on QA and CS roles since before the LLM era. For many of those companies the lack of proper QA leads to more problems for users which compounds the lack of available CS staff. It's called either enshittification or maximizing shareholder value, can't remember which.
[−] rikschennink 37d ago
I asked the Gumroad support AI for a human.

It forwarded my request, which was then answered by an OpenClaw agent :/

Still waiting for a response two weeks later.

[−] tasoeur 36d ago
I submitted an app (connector?) to their store, and their submission form indicated a 2-week turnaround for an answer, including the possibility of not getting a response at all (it said so verbatim). Not sure who's responsible for customer support, but damn. (Needless to say, I never heard back.)
[−] subscribed 37d ago
TBF I'd probably pay some solicitor $50 to have them send a nicely worded letter after 2 weeks.

You're too kind to a company trying to steal from you; whether intentionally or by negligence doesn't really matter.

Or the small claims court mentioned by someone else. Make sure to add your time and the cost of the representation.

[−] skywhopper 37d ago
This sucks but is not surprising at all. Anthropic has more demand than it could ever fulfill, and looking into support tickets asking for refunds is never going to get anyone’s attention. If you actually want the money back, assuming you live in the US, this is what small-claims court is for.
[−] dools 37d ago
Same here. I tested the Claude Code CLI and it crapped out and took my money, and they haven't responded to my billing dispute yet. Meanwhile, JetBrains replies within hours and Junie is LLM-agnostic. I'm a huge fan of JetBrains for AI coding.
[−] gverrilla 36d ago
Free market: a market that is completely free to do whatever it wants to you.
[−] GrayHerring 37d ago
The fact that nothing has changed regarding their non-existent support within a year just shows where their priorities lie. And I will make the bold assumption that this situation will still be unchanged in exactly one year.
[−] g-technology 37d ago
I guess I shouldn’t feel so bad then that I have a ticket open that I keep updating every few days with how long it’s been without a response. It’s only been a few weeks.
[−] grokcodec 37d ago
If this is on a credit card, you can get the money back from the credit card company for "undelivered goods".