
#geneticprogramming

Hello Fediverse. So I'm looking for a #remote #opensource job or project in European timezones.

I am not good at writing CVs, so I'm just listing the projects I have done:

I'm a #Linux user. I have good experience using the CLI, and I have basic shell scripting skills. I also have a little experience with #FreeBSD.

I am also good at reading academic papers, standards (like RFCs) and manpages.

I am up for working on #FOSS projects as a freelancer or on part-time contracts.

Boosts appreciated :)

PS: I am also familiar with #CommonLisp. But I highly doubt I can find a #Lisp job anywhere!

Codeberg.org: wakegp - Detecting wake words using Linear Genetic Programming

So my workhorse machine is busy running experiments to evaluate the performance of deletion mutation combined with my parsimony pressure method. But well, there are still free cores.

So I'm running other experiments on those: 3x400 demes, using the best parameters found in experiments so far, though not all parameters are fine-tuned yet. Since I have extended the negative dataset to 8 words, the fitness doesn't climb like before. And of course, since I have increased the population (4x), it will take much longer to converge.
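For anyone wondering what demes are: isolated populations that evolve independently and occasionally exchange their best individuals. A toy island-model sketch in Python (illustrative only, not wakegp's actual code; the bit-string genomes and one-max fitness are stand-ins for real linear GP programs):

    import random

    # Toy island (deme) model: DEMES isolated populations, with the
    # best few individuals migrating along a ring every few generations.
    DEMES, DEME_SIZE, GENS = 3, 400, 50
    MIGRATION_INTERVAL, MIGRANTS, GENOME_LEN = 10, 5, 32

    def fitness(ind):
        return sum(ind)                      # one-max: count the 1-bits

    def mutate(ind):
        child = list(ind)
        child[random.randrange(len(child))] ^= 1   # flip one random bit
        return child

    def step(pop):
        new = []
        for _ in range(len(pop)):
            a, b = random.sample(pop, 2)     # binary tournament
            new.append(mutate(max((a, b), key=fitness)))
        return new

    demes = [[[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(DEME_SIZE)] for _ in range(DEMES)]

    for gen in range(1, GENS + 1):
        demes = [step(pop) for pop in demes]
        if gen % MIGRATION_INTERVAL == 0:
            for i, pop in enumerate(demes):
                best = sorted(pop, key=fitness)[-MIGRANTS:]
                dest = demes[(i + 1) % DEMES]
                dest.sort(key=fitness)
                dest[:MIGRANTS] = best       # migrants replace the worst

    print(max(fitness(ind) for pop in demes for ind in pop))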

Let's see how it'll go.

@jutty @stefano

I would say both extremes are wrong about LLMs. Some people inflate them into pure hype; some say they are nothing.

LLMs are very useful as a new generation of search engine, especially when you don't even know what to query. They are also useful for code review.

For code generation? I second Stefano's opinion:

mastodon.bsd.cafe/@stefano/114

Also, there were ways to generate code even before LLMs, using #GeneticProgramming. For example, I saw a paper from about a decade ago on evolving a sorting algorithm with it.

BSD.cafe Mastodon Portal - Stefano Marinelli (@stefano@bsd.cafe): Ever heard of vibe coding? It’s when the code looks fine, tests pass, vibes are good - so it goes to production. Even if it’s wide open to SQL injection. I’ve seen it happen. AI wrote it. Devs trusted it. Management loved it. Nobody understood it. We’re trading skill for speed. And that’s how we lose our freedom. Vibe Coding Will Rob Us of Our Freedom: https://it-notes.dragas.net/2025/06/05/vibe-coding-will-rob-us-of-our-freedom/ #ITNotes #ai #coding #data #ownyourdata #programming #IT #SysAdmin

Alright, another round of experiments for #wakegp has ended. This round was to determine whether adding the conditional SelectP instruction to the instruction set has any effect on program size and fitness.

Another round of experiments, still ongoing, is determining how deletion mutation affects program size and fitness, that is, whether it prevents bloat. Earlier experiments show that the simple parsimony pressure method I invented has a positive effect on both fitness and program size, and when deletion mutation is added, the effect is even better. In those experiments, randomly deleting one instruction from every program (that is, a rate of 1.0) had a positive impact on both fitness and average program size. Now I'm experimenting to see whether more than one deletion per program would be even better.
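For the curious, here is roughly what these two ingredients look like for a linear program stored as a list of instructions. A toy sketch only: my actual parsimony method isn't spelled out here, so this uses the classic length-penalty form as a stand-in, and all names and constants are illustrative:

    import random

    def deletion_mutation(program, rate=1.0):
        # rate=1.0 deletes one randomly chosen instruction per program,
        # as in the experiments above; rate=2.0 would delete two, etc.
        program = list(program)
        for _ in range(int(rate)):
            if len(program) > 1:             # keep at least one instruction
                del program[random.randrange(len(program))]
        return program

    def penalized_fitness(raw_fitness, program, alpha=0.001):
        # Classic parametric parsimony pressure: subtract a small penalty
        # proportional to program length (assumes higher raw_fitness is
        # better; alpha is an illustrative coefficient).
        return raw_fitness - alpha * len(program)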

Alright, finished another round of #wakegp tests. It seems the parsimony pressure method I invented is effective and also has a positive impact on fitness. This matches the findings and opinions of other researchers that bloat has a negative impact on fitness.

Now I've started another round, which will end in ~11h, to see the effect of deletion mutation on fitness and program size, if any. Current experiments suggest that deletion mutation has no effect on either, in one direction or the other.

PS: I just hope that not having a bachelor's degree won't be a problem when I want to publish the paper. I'm not too worried about the paper getting rejected, though. Publishing it is a beneficial side effect; results are my goal.

#EvolutionaryML #EC #AI #ML #MachineLearning #GeneticProgramming #LinearGeneticProgramming
#EvolutionaryMachineLearning #GP #bioinspiredcomputing #BioinspiredML #academia #academic #academics

I like productivity! So I've found the best #FFT size for #Wakegp. The experiments I have done on my #parsimonypressure method show that my simple approach is effective. Now I'm experimenting with it to find the sweet spot.
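For context, the FFT size is the analysis window length used to turn raw audio into spectral frames, roughly like this toy numpy sketch (not wakegp's actual pipeline; the values are illustrative):

    import numpy as np

    FFT_SIZE, HOP = 512, 256                 # illustrative values

    def spectral_frames(audio):
        # Slice audio into overlapping windows of FFT_SIZE samples and
        # take the magnitude spectrum of each; FFT_SIZE is the
        # hyperparameter being tuned above.
        window = np.hanning(FFT_SIZE)
        frames = [np.abs(np.fft.rfft(audio[s:s + FFT_SIZE] * window))
                  for s in range(0, len(audio) - FFT_SIZE + 1, HOP)]
        return np.array(frames)              # (n_frames, FFT_SIZE//2 + 1)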

Regarding instructions, I have already done experiments, and the results were unexpected: adding functions like Sine, Cosine, Logarithm and Square Root has a negative impact on fitness.

There are still a lot of parameters I need to fine-tune: mutation rates, number of demes, tournament size, reproduction rate, crossover rate and more. I also need to experiment with #LexicaseSelection.
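For reference, standard lexicase selection (after Helmuth & Spector) filters the pool case by case in a random order. A toy sketch, assuming each individual carries a per-test-case error vector:

    import random

    def lexicase_select(population, errors):
        # errors[i] is the per-test-case error list of population[i]
        # (lower is better). Candidates are filtered case by case, in
        # random order, keeping only those best on each case.
        candidates = list(range(len(population)))
        cases = list(range(len(errors[0])))
        random.shuffle(cases)
        for case in cases:
            best = min(errors[i][case] for i in candidates)
            candidates = [i for i in candidates if errors[i][case] == best]
            if len(candidates) == 1:
                break
        return population[random.choice(candidates)]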

After all these, once I have runs producing accurate enough programs, I should learn how to optimize the evolved programs in order to "summarize" them. I really hope compilers like #LLVM and #GCC already do this. I have my doubts, but most likely they already have passes that simplify programs while preserving the logic.
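In linear GP, the standard first step of such summarizing is removing structural introns (after Brameier & Banzhaf): walk the program backwards from the output register and keep only instructions whose destination is still needed. A toy sketch with a hypothetical (dest, src1, src2) register encoding, valid for straight-line code only (conditional instructions like SelectP need extra care):

    def effective_code(program, output_register=0):
        # program: list of (dest, src1, src2) register-index triples.
        needed = {output_register}
        effective = []
        for dest, src1, src2 in reversed(program):
            if dest in needed:
                effective.append((dest, src1, src2))
                needed.discard(dest)         # earlier writes are overwritten...
                needed.update((src1, src2))  # ...unless dest is also a source
        effective.reverse()
        return effective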

God willing, when #repebble (the new #pebble #smartwatch) comes to market, I can try to see whether I can use it there. Then #repebble would be the first industrial application of my research.

cc @lspector

Edit: Regarding the simple parsimony pressure method I invented, I'm pretty sure someone has already invented it, but I haven't found it in the literature. Just like tons of other things I invent or discover in Math or CS, only to realize they have already been discovered or invented.

#MachineLearning #EvolutionaryMachineLearning #EvolutionaryML #ML #ArtificialIntelligence #AI #wakeworddetection #wake_word_detection #hotworddetection
#GeneticProgramming #LinearGeneticProgramming #MachineLearning

Unfortunately, my workhorse computer is now offline, and I am in another city, so I have no access to it. Most likely there has been a #poweroutage, which is now very common in my country, #iran.

I am doing experiments with #wakegp to see whether my simple method for parsimony pressure is effective. So far, its effect seems to be very small. I'm thinking about other methods for parsimony pressure, such as bucketing and tournament selection (on size instead of fitness).
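Tournament selection on size could look like the double tournament of Luke & Panait: winners of small fitness tournaments then compete on size. A toy sketch (names and probabilities are illustrative):

    import random

    def fitness_tournament(population, fitness, k=2):
        return max(random.sample(population, k), key=fitness)

    def double_tournament(population, fitness, size, k=2, p_size=0.7):
        # Two fitness-tournament winners then compete on size: with
        # probability p_size the smaller program wins, adding parsimony
        # pressure on top of ordinary selection.
        a = fitness_tournament(population, fitness, k)
        b = fitness_tournament(population, fitness, k)
        smaller, larger = (a, b) if size(a) <= size(b) else (b, a)
        return smaller if random.random() < p_size else larger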

I am expecting to deliver results in summer, God willing.

After my #wake_word_detection #research has delivered fruit, I plan to continue working in the voice domain. I would love to train a #TTS model with a #British accent, so I could use it to practice.

I was wondering whether I could do the inference on the #A311D #NPU. However, as I skim papers on different models, getting inference on the A311D with reasonable performance seems unlikely. Even training these models on my entry-level #IntelArc #GPU would be painful.

Maybe I could just fine-tune an already existing model. I am also thinking about using #GeneticProgramming for some components of these TTS models, to see whether that gives better inference performance.

There are #FastSpeech2 and #SpeedySpeech, which look promising. I wonder how natural their accents will be, but they would be good starting points.

BTW, if anyone needs open-source models, I would love to work as a freelancer and have an #opensource job. Even if someone could just provide access to computation resources, that would be good.

#forhire #opensourcejob #job #hiring

Geometric Semantic #geneticprogramming was a big breakthrough in GP in 2012. The relationship between syntax and semantics is - in one way - easy to understand and take advantage of. 10 years later (!), here is the GPEM special issue.

Special issue collection: link.springer.com/collections/

Editorial introduction by Moraglio et al: link.springer.com/article/10.1

SpringerLink - Special Issue for the Tenth Anniversary of Geometric Semantic Genetic Programming. Call for Papers: https://www.springer.com/journal/10710/updates/23957712
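For anyone new to the idea, the two operators are simple to state. A toy sketch of the Moraglio et al. definitions, with individuals represented as callables (an illustrative representation, not a library API):

    import random

    def gs_crossover(p1, p2, r):
        # r is a random function with outputs in [0, 1]; the child's
        # semantics is a pointwise convex combination of the parents,
        # so it lies geometrically "between" them by construction.
        return lambda x: r(x) * p1(x) + (1.0 - r(x)) * p2(x)

    def gs_mutation(p, r1, r2, ms=0.1):
        # r1, r2 are random functions; ms is the mutation step.
        return lambda x: p(x) + ms * (r1(x) - r2(x))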

New paper in GPEM on requirements engineering:

"RSCID: requirements selection considering interactions and dependencies", by Keyvanpour et al.

#geneticprogramming

link.springer.com/article/10.1

SpringerLink - RSCID: requirements selection considering interactions and dependencies (Genetic Programming and Evolvable Machines). Abstract: Requirements selection is one of the essential aspects of requirement engineering. So far, a lot of work has been done in this field. But, it is difficult to choose the right set of software requirements, taking into account their interactions and dependencies and only a few researches have paid attention to interactions and dependencies between requirements. However, in this paper, an attempt has been made to provide a method by considering interactions and dependencies between requirements. To better manage these features, we have also improved the search-based methods used in this area. According to the proposed method called RSCID, before choosing the optimized subset of requirements, dependencies between requirements are extracted. In the next step, an algorithm is proposed based on the NSGA-II method. In this algorithm, a hybrid fitness function is introduced in addition to two other functions that are used. To tradeoff between cost and value functions, user interactions are also deployed. Another algorithm is used in this paper to choose an appropriate requirements subset, the combination of the NSGA-II method and a genetic algorithm to obtain three fitness functions. The results of the proposed methods have been compared to other methods based on the evaluation criteria in this field. The experiments show the efficiency of the proposed methods to select efficient and useful requirements.

🎢 Welcome to the wild ride of "Genetic Programming" for dummies, where random code #generation and mind-numbing node traversal are the new extreme sports! 🤯 Watch in awe as we reinvent the wheel and call it a "toy project," all while praying our RANDOMELT doesn't melt our tiny brains. 🤖💥
aerique.blogspot.com/2011/01/b #GeneticProgramming #RandomCode #NodeTraversal #ExtremeSports #ToyProject #HackerNews #ngated

aerique.blogspot.com - Ceci n'est pas un titre: Baby Steps into Genetic Programming

Now that you've finished CEC revisions... and finalising EuroGP camera-ready... and you have GECCO acceptance decisions... and you've finished GECCO workshop submissions...

...keep up the momentum to get your paper ready for a GPEM submission!

"Constraining genetic symbolic regression via semantic backpropagation" by Reissman et al in GPEM

#geneticprogramming

link.springer.com/article/10.1

SpringerLink - Constraining genetic symbolic regression via semantic backpropagation (Genetic Programming and Evolvable Machines). Abstract: Evolutionary symbolic regression approaches are powerful tools that can approximate an explicit mapping between input features and observation for various problems. However, ensuring that explored expressions maintain consistency with domain-specific constraints remains a crucial challenge. While neural networks are able to employ additional information like conservation laws to achieve more appropriate and robust approximations, the potential remains unrealized within genetic algorithms. This disparity is rooted in the inherent discrete randomness of recombining and mutating to generate new mapping expressions, making it challenging to maintain and preserve inferred constraints or restrictions in the course of the exploration. To address this limitation, we propose an approach centered on semantic backpropagation incorporated into the Gene Expression Programming (GEP), which integrates domain-specific properties in a vector representation as corrective feedback during the evolutionary process. By creating backward rules akin to algorithmic differentiation and leveraging pre-computed subsolutions, the mechanism allows the enforcement of any constraint within an expression tree by determining the misalignment and propagating desired changes back. To illustrate the effectiveness of constraining GEP through semantic backpropagation, we take the constraint of physical dimension as an example. This framework is applied to discover physical equations from the Feynman lectures. Results have shown not only an increased likelihood of recovering the original equation but also notable robustness in the presence of noisy data.