fosstodon.org is one of the many independent Mastodon servers you can use to participate in the fediverse.
Fosstodon is an invite only Mastodon instance that is open to those who are interested in technology; particularly free & open source software. If you wish to join, contact us for an invite.


#troubleshooting

7 posts · 7 participants · 0 posts today

Something is eating my RAM, and it's pretty stealthy.
My used memory is about double what it should be according to per-process usage. This is with #linux 6.16, GNOME, a browser, and a mailer running.

If I start a VM with 16 GB allocated, it runs into #OOM pretty quickly. If I start the same VM with 12 GB, it also OOMs, just more slowly. The stealth usage must be growing there, otherwise it wouldn't run out of memory.

I found this promising thread, but it's not my issue: unix.stackexchange.com/questio

Any ideas?
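One way to quantify the "stealth" part is to compare the kernel's own accounting in /proc/meminfo against the summed RSS of all processes: a large gap usually points at memory the kernel holds directly (slab, hugepages, shmem/tmpfs, or a hypervisor ballooning guests), which never shows up as any process's RSS. This is a minimal sketch with synthetic numbers; the exact set of fields to subtract is an assumption, since kernel accounting is subtler than this.

```python
def parse_meminfo(text):
    """Parse /proc/meminfo-style 'Key:  value kB' lines into a dict of kB values."""
    info = {}
    for line in text.splitlines():
        if ":" in line:
            key, rest = line.split(":", 1)
            parts = rest.split()
            if parts and parts[0].isdigit():
                info[key.strip()] = int(parts[0])
    return info

def unaccounted_kb(meminfo, total_process_rss_kb):
    """Memory that is neither free, reclaimable cache, nor any process's RSS."""
    used = (meminfo["MemTotal"]
            - meminfo["MemFree"]
            - meminfo.get("Buffers", 0)
            - meminfo.get("Cached", 0)
            - meminfo.get("SReclaimable", 0))
    return used - total_process_rss_kb

# Synthetic example: 32 GB box, ~19 GB "used" after discounting caches,
# but only 12 GB of it is explained by process RSS.
sample = """\
MemTotal:       32000000 kB
MemFree:         4000000 kB
Buffers:          500000 kB
Cached:          7500000 kB
SReclaimable:    1000000 kB
"""
info = parse_meminfo(sample)
print(unaccounted_kb(info, 12000000))
```

On a real system you'd feed it `open("/proc/meminfo").read()` and a summed `VmRSS` from `/proc/*/status`; if the unaccounted number tracks your VM's growth, the candidates are hugepages reserved for the VM or kernel-side allocations on its behalf.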

Hey 3d printing friendos! I'm having some issues with resin printing, and curious if fedi would be able to assist!

So my Mars Pro seems to be in mostly working order. I can still print minis with great detail, and I seem to get no failures when auto-generating "medium"-sized supports with Lychee. That said, I've been trying to use provided pre-supported files, in the hope of nicer details from smaller support sizes.

So, the problem -

On some surfaces, I'm noticing that a chunk of the model almost gets "torn away" from the supports during printing. It's happened on a few different models, so I'm more inclined to think a setting or parameter change on the printer might help?

Does this sort of thing look familiar to anyone? Are there any obvious settings on my Printer that I should try changing?

(Thanks for taking the time and reading, peeps. <3 )

Oof, I have no idea where to go next investigating this problem. I have an SMB/CIFS network drive that suddenly refuses to enter two folders (all the other folders on the drive are fine). It's possibly because the folders are large/have lots of items, but I'm not sure. Searching doesn't turn up anything (even with Kagi).

It's weird that just these two folders suddenly hang whenever I try to enter them, either at the command line or via the GUI file manager.

I'm stumped. Stumped, I say.
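One diagnostic that sometimes separates the cases: stream directory entries lazily instead of letting a file manager (or `ls`) enumerate and stat everything at once. If even grabbing the first few entries stalls, the hang is in the SMB directory read itself; if it returns quickly, the culprit is more likely the per-entry stat/metadata work layered on top. A small sketch (the demo uses a throwaway local directory; pointing it at the actual mount path is up to you):

```python
import itertools
import os
import tempfile

def peek_entries(path, limit=20):
    """Return at most `limit` entry names without enumerating the whole directory.

    os.scandir yields entries lazily, so on a network filesystem this issues
    far less work than a full listing of a huge folder.
    """
    with os.scandir(path) as it:
        return [e.name for e in itertools.islice(it, limit)]

# Local demo; replace the temp dir with the hanging SMB folder to test.
with tempfile.TemporaryDirectory() as d:
    for i in range(50):
        open(os.path.join(d, f"file{i:03d}.txt"), "w").close()
    print(len(peek_entries(d, limit=20)))
```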

On the "stalling" of the checkpoint process and archive_timeout in Postgres

Good afternoon, colleagues! We recently ran into the following problem while load-testing the PostgresPro DBMS: the workload was a massive multithreaded data load running for many hours, with about 20 TB of data across 75 threads. During the load we observed the following behaviour: after some time, the checkpointer process stopped performing checkpoints, either immediately or after 2-3 hours, depending on other database parameters.

habr.com/ru/companies/gnivc/ar
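For context on the behaviour described, these are the postgresql.conf settings that govern when the checkpointer runs and when archive_timeout forces a WAL segment switch. The values below are purely illustrative, not recommendations from the article:

```ini
# Checkpoint pacing (postgresql.conf) — illustrative values only
checkpoint_timeout = 5min            # target interval between checkpoints
max_wal_size = 16GB                  # WAL growth that forces an early checkpoint
checkpoint_completion_target = 0.9   # spread checkpoint I/O over the interval
archive_timeout = 60s                # force a segment switch so archiving can't stall
```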


🚀🎩 Ah, the magical realm of #Next.js, where "oopsies" are as mysterious as the Bermuda Triangle and logging is as transparent as a brick wall. 🌪️🧙‍♂️ Our valiant blogger finally overcame the Herculean task of starting a blog—only to discover that Next.js might require a PhD in divination for basic #troubleshooting. 📜🔍
blog.meca.sh/3lxoty3shjc2z #Blogging #Magic #Oopsies #TechHumor #HackerNews #ngated

blog.meca.sh: Next.js Is Infuriating - Dominik's Blog
Continued thread

Good news: no complicated etcd cluster needed. I figured out a way to get it working with Technitium as my upstream server for *.k8s-dr.home. This replaces the excoredns pod I was running before, which handled such requests. I did have to set up a TSIG key and let external-dns do zone transfers, but that all works out anyway.
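For anyone wanting to replicate this: external-dns ships an RFC2136 provider that sends TSIG-signed dynamic updates to a DNS server, which is the mechanism Technitium supports here. A sketch of the container args, with the key name and secret obviously placeholders and the zone taken from the post:

```yaml
# external-dns deployment args (sketch; key name/secret are placeholders)
args:
  - --provider=rfc2136
  - --rfc2136-host=technitium.home        # assumed hostname for the DNS server
  - --rfc2136-port=53
  - --rfc2136-zone=k8s-dr.home
  - --rfc2136-tsig-keyname=externaldns-key
  - --rfc2136-tsig-secret=REDACTED
  - --rfc2136-tsig-secret-alg=hmac-sha256
  - --rfc2136-tsig-axfr                   # allow zone transfers, as described above
  - --domain-filter=k8s-dr.home
```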

I also re-learned that I should be deploying the nginx ingress controller manifest that's meant for cloud environments, because the bare-metal one (I assume) expects you to have some external load balancer. It was actually picking up one of the k8s nodes' IP addresses instead of an address from the load balancer pool. Switching back to the cloud variant made it work with the MetalLB IP address pool, and that's now working.

With all of this homelab work lately, I should be able to get at least a couple of blog posts out of it. One about the new hardware for the lab, another for the new k8s setup and the fun of setting that up.

Ansible really saved my sanity through this whole process. I was able to recreate the cluster on demand in only a few minutes, including cloning templates, configuring them, and bootstrapping a 5-node cluster.
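The "recreate on demand" flow above maps naturally onto a single playbook of roles. This is a hypothetical sketch of that shape — every role and group name here is made up for illustration, not taken from the actual repo:

```yaml
# rebuild-cluster.yml — hypothetical structure for the workflow described
- name: Recreate the lab k8s cluster from scratch
  hosts: k8s_nodes          # the 5 VMs cloned from templates
  become: true
  roles:
    - clone_template        # clone and boot the VM templates
    - base_config           # hostnames, users, packages, kernel settings
    - kubeadm_bootstrap     # init the first control plane, join the rest
```

Run with something like `ansible-playbook -i inventory.ini rebuild-cluster.yml`; keeping each phase in its own role is what makes the few-minutes rebuild repeatable.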