fosstodon.org is one of the many independent Mastodon servers you can use to participate in the fediverse.
Fosstodon is an invite only Mastodon instance that is open to those who are interested in technology; particularly free & open source software. If you wish to join, contact us for an invite.

#valgrind

mjw:

#Valgrind https://valgrind.org not only tweaks your instruction stream, it also wraps all your system calls, so it knows about all the (memory) manipulation that goes on in your program (and can hide itself from the program it is running inside).

We integrated the #LTP https://linux-test-project.readthedocs.io syscalls testsuite with "make ltpchecks".

We are now down to ~50 failures: https://builder.sourceware.org/testrun/fca661c30befd4e15b1b28e254a20c4d6420826a?amtestdir=ltp%2Ftests&filter_res=FAIL&perPage=50

All tracked in the meta bug dependency tree: https://bugs.kde.org/showdependencytree.cgi?id=506971

Let's bring that count down to zarro boogs!

Next #swad release will still be a while. 😞

I *thought* I had the version with multiple #reactor #eventloop threads and quite some #lockfree stuff using #atomics finally crash free. I found that, while #valgrind doesn't help much, #clang's #thread #sanitizer is a very helpful debugging tool.

But I tested without #TLS (to be able to handle "massive load" which seemed necessary to trigger some of the more obscure data races). Also without the credential checkers that use child processes. Now I deployed the current state to my prod environment ... and saw a crash there (only after running a load test).

So, back to debugging. I hope the difference is not #TLS. TLS just doesn't work (for whatever reason) with the address sanitizer enabled, but I haven't checked it with the thread sanitizer yet...

TIL: calling munmap() with wrong parameters can cause really strange occasional crashes in #AddressSanitizer or #Valgrind. Wrong parameters meaning in this case:
- passing wrong (too large) size,
- passing NULL,
- calling munmap() multiple times for the same pointer.

Debugging the crashes was painful. What helped in the end was "doing the boring right thing": adding error checking to all related system calls, adding debug logs, fixing code smells.

I just stress-tested the current dev state of #swad on #Linux. The first attempt failed miserably: I got a lot of errors accepting a connection. Well, this led to another little improvement: I added another static method to my logging interface that mimics #perror, i.e. it also prints the description of the system errno. With that in place, I could see the issue was "too many open files". Checking #ulimit -n gave me 1024. Seriously? 🤯 On my #FreeBSD machine, as a regular user, it's 226755. Ok, I bumped that up to 8192 and then the stress test ran through without issues.

On a side note, this also made creating new timers (using #timerfd on Linux) fail, which ultimately made swad crash. I have to redesign my timer interface so that creating a timer may explicitly fail and I can react to that, aborting whatever would need that timer.

Anyways, the same test gave somewhat acceptable results: throughput of roughly 3000 req/s, response times around 500ms. Not great, but okayish, and not directly comparable because this test ran in a #bhyve vm and the requests had to pass the virtual networking.

One major issue is still the #RAM consumption. The test left swad with a resident set of > 540 MiB. I have no idea what to do about that. 😞 The code makes heavy use of "allocated objects" (every connection object with metadata and buffers, every event handler registered, every timer, and so on), so, uses the #heap a lot, but according to #valgrind, correctly frees everything. Still the resident set just keeps growing. I guess it's the classic #fragmentation issue...

Hm, is #valgrind's #helgrind useless for code using #atomic operations? For example, it complains about this:

==9505== Possible data race during read of size 4 at 0xADD57F4 by thread #14
==9505== Locks held: none
==9505== at 0x23D0F1: PSC_ThreadPool_cancel (threadpool.c:761)
[....]
==9505== This conflicts with a previous write of size 4 by thread #6
==9505== Locks held: none
==9505== at 0x23CDDE: worker (threadpool.c:373)

so, here's threadpool.c:761:

if ((pthrno = atomic_load_explicit(
&job->pthrno, memory_order_consume)) >= 0)

and here's threadpool.c:373:

atomic_store_explicit(&currentJob->pthrno, -1,
memory_order_release);

Ok, I *think* this should be fine? Am I missing something?

(screenshots for readability ...)

#c #coding #c11

Valgrind-3.25.1 is available!

valgrind.org/downloads/current

This point release contains only bug fixes, including some regression fixes.

- Incorrect NAN-boxing for float registers in RISC-V
- close_range syscalls started failing with 3.25.0
- mount syscall param filesystemtype may be NULL
- FILE DESCRIPTORS banner shows when closing some inherited fds
- FreeBSD: missing syscall wrappers for fchroot and setcred
- Double close causes SEGV


Please help test #valgrind 3.25.0-RC1

sourceforge.net/p/valgrind/mai

- Initial RISCV64/Linux support.
- Valgrind gdbserver supports 'x' packets.
- Bug fixes for Illumos.
- --track-fds=yes treats inherited file descriptors like stdin/out/err (0,1,2).
- New option --modify-fds=high.
- s390x support for new instructions (BPP, BPRP and NIAI).
- New Linux syscalls supported (landlock*, open_tree, move_mount, fsopen, fsconfig, fsmount, fspick, userfaultfd).
- The Linux Test Project (ltp) is integrated in the testsuite.


Nice, #threadpool overhaul done. Removed two locks (#mutex) and two condition variables, replaced by a single lock and a single #semaphore. 😎 Simplifies the overall structure a lot, and it's probably safe to assume slightly better performance in contended situations as well. And so far, #valgrind's helgrind tool doesn't find anything to complain about. 🙃

Looking at the screenshot, I should probably make #swad default to *two* threads per CPU and expose the setting in the configuration file. When some thread jobs are expected to block, having more threads than CPUs is probably better.

github.com/Zirias/poser/commit


Friend @Computer I streamlined multiple levels of security by troubleshooting them. Our new security posture eliminates treacherous "defense in depth" wasted and disloyal effort. (This repeats earlier work that fixed #valgrind runs on #openssl, and of course recent similar improvements carried out by #doge_doofus)

Debuginfod project update 2024 by Aaron Merey

developers.redhat.com/articles

- Metrics and scale of debuginfod servers #prometheus
- New tools and features in debuginfod #elfutils
- IMA verification support
- Addressing kernel VDSO extraction bottlenecks @osandov
- Lazy debug info downloading in #Valgrind and #GDB


Darn it. Half a year later we are drowning in bugs again. We managed to close 67 bugs, but 99 new bugs were reported. So we are sitting on 1000+ open bugs again for #valgrind. Please help...

Built my first lisp (after already having started writing two compilers xD) in #zig! Mainly to test out the language, but it was much fun anyway.

It also gave me a few ideas for my own language, for a few functional programming things :D

What I enjoyed most was the memory checking Zig does, be it leaks, use-after-free or double-free, which made this the easiest of all my native (C-like) projects to get memory-safe, without spending multiple days in #valgrind xD

#foss #oss #opensource #programming #langdev #lisp #coding #c #c++ #zig #functionalprograming #compilerdev

CodeArq · sexpr-zig: A small lisp implementation in zig

Just got a flashback to when I was doing some hired work for a company and got the feedback that I was 'over-engineering' things and 'optimizing too much'. Apparently they had never heard of the principle of 'fixing memory leaks', since #Valgrind was a new invention to them...

It still baffles me how unaware a lot of companies are about proper software development and just 'go with it'.