Last chance to join OpenUK's London meet-up in person on 23 April and explore "the WebAssembly ecosystem AKA WASM"!
With Bailey Hayes and Graziano Casto, hosted by Jennifer Riggins and Fergus Kidd
Find out more about joining here: https://openuk.uk/event-calendar/openuklondon23savethedate/ Thanks to our sponsor, Avanade.
#opensource #opensourcesoftware #openuk #WASM
Microsecond transforms: Building a fast sandbox for user code https://lobste.rs/s/xgw2d9 #api #elixir #lua #virtualization #wasm
https://blog.sequinstream.com/microsecond-transforms-building-a-lightning-fast-sandbox-for-user-code/
WebAssembly with your bare hands
WebAssembly, while a (relatively) young technology, is already fairly widespread in the industry. Nevertheless, almost all material online treats WASM as a compilation target for other, higher-level languages. There is very little information about working with WebAssembly itself and writing code directly in it, and even less of it in the Russian-language web, which is what I'll try to fix below the cut.
Oh, there is even an article about implementing the bitmask operation on ARM: https://community.arm.com/arm-community-blogs/b/architectures-and-processors-blog/posts/webassembly-bitmask-operations#mcetoc_1hcv09ro14
Looks like it jumps through a bunch of hoops, since the operation isn't natively supported. I wonder if I can work around it somehow… My use case is a ctz right after getting the bitmask, and the full vector is just a bunch of booleans... maybe I should pack the vector differently somehow (sketch of the pattern below).
This can't be V8's bitmask implementation, can it? #wasm
The generated ARM code looks alright, though...
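For reference, the pattern I mean as a rough Rust sketch (my names and layout, not from any real parser), assuming the crate is built for wasm32 with RUSTFLAGS="-C target-feature=+simd128":

```rust
// Find the index of the first "true" lane in 16 boolean bytes with wasm SIMD:
// a bitmask followed immediately by ctz.
#[cfg(target_arch = "wasm32")]
use core::arch::wasm32::{i8x16_bitmask, i8x16_ne, i8x16_splat, v128, v128_load};

#[cfg(target_arch = "wasm32")]
fn first_true(flags: &[u8; 16]) -> Option<u32> {
    // Unaligned v128 load of the 16 boolean bytes.
    let v: v128 = unsafe { v128_load(flags.as_ptr() as *const v128) };
    // Lane mask of "nonzero" bytes, collapsed into a scalar bitmask.
    let mask = i8x16_bitmask(i8x16_ne(v, i8x16_splat(0))) as u32;
    if mask == 0 {
        None
    } else {
        // ctz on the bitmask gives the lane index of the first true boolean.
        Some(mask.trailing_zeros())
    }
}
```

Roughly, on x86-64 the bitmask lowers to a movemask-style instruction and the ctz to tzcnt; ARM64 NEON has no single-instruction movemask, which is exactly the hoop-jumping the article above describes.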
I spent so much time trying different permutations and chains of wasm SIMD ops to speed up the parser, only to find out that there are no significant gains on my ARM64 laptop, though there are on Intel.
I guess either ARM64 is pretty fast on the non-vectorized path, or V8 just isn't emitting the best instructions for it.
What Python Day at Positive Hack Days will be about this time
On 24 May, as part of
Want to learn about hardware coding with Go, but don't have any actual gear? We've got you covered!
Check out the TinyGo tour: https://tinygo.org/tour/
I've created my first GUI in Rust. I made a wrapper for #paperage, which is an application for saving encrypted secrets on paper.
I had to tweak paper-age a bit, but now you can use it in the browser using #WASM.
It's still a bit messy, but it should work. And of course, there's #nix support.
You can find it on GitHub here: https://github.com/renesat/paper-age-gui
P.S. Thanks for the idea, @iuvi. You can start testing.
Generates 16 random bytes, converts them to a CryptoKey object, and encrypts them for the RSA-key issuer.
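For anyone wondering how a Rust crate like the tweaked paper-age ends up callable from the browser, a hypothetical wasm-bindgen sketch (the names here are mine, not paper-age-gui's):

```rust
use wasm_bindgen::prelude::*;

// Hypothetical shape of a browser-facing entry point; the real paper-age-gui
// API almost certainly differs. `wasm-pack build --target web` turns this
// into a JS-importable module.
#[wasm_bindgen]
pub fn encrypt_note(plaintext: &str, passphrase: &str) -> Result<String, JsError> {
    if passphrase.is_empty() {
        return Err(JsError::new("passphrase must not be empty"));
    }
    // The real wrapper would call into the patched paper-age crate here;
    // this stub just echoes the input size so the sketch stays self-contained.
    Ok(format!("would encrypt {} bytes of plaintext", plaintext.len()))
}
```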
Compilers for free with weval
Author: Max Bernstein
Original: https://bernsteinbear.com/blog/weval/
wgpu: A cross-platform, safe, pure-Rust graphics API.
「 wgpu is a cross-platform, safe, pure-rust graphics API. It runs natively on Vulkan, Metal, D3D12, and OpenGL; and on top of WebGL2 and WebGPU on wasm.
The API is based on the WebGPU standard. It serves as the core of the WebGPU integration in Firefox, Servo, and Deno 」
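For a sense of scale, initialization looks roughly like this (a sketch, not taken from the README; exact signatures shift between wgpu releases, this assumes a ~0.20-era API where request_adapter returns an Option and request_device takes a trace path):

```rust
// Rough wgpu setup: the same code path picks Vulkan/Metal/D3D12/GL natively
// or WebGPU/WebGL2 when compiled to wasm.
async fn init_gpu() -> Option<(wgpu::Device, wgpu::Queue)> {
    let instance = wgpu::Instance::default();
    let adapter = instance
        .request_adapter(&wgpu::RequestAdapterOptions::default())
        .await?;
    let (device, queue) = adapter
        .request_device(&wgpu::DeviceDescriptor::default(), None)
        .await
        .ok()?;
    Some((device, queue))
}
```

On wasm this future would be driven by something like wasm-bindgen-futures; natively, pollster::block_on(init_gpu()) is enough.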
@tuban_muzuru An alternative to egui would be a frontend framework like Yew or Dioxus, which would let you interact with the DOM.
However, as long as you don't need accessibility support, I think egui is pretty stable and a good option for more app-based websites.
Edit: Yew is dead, so try another one from here: https://github.com/flosse/rust-web-framework-comparison
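For the app-based-website case, the egui side stays pleasantly small; a hypothetical sketch using eframe's native entry point (the web build goes through eframe's web runner instead, but the UI closure is the same):

```rust
// Minimal egui app via eframe; assumes an eframe version where
// run_simple_native is available.
fn main() -> Result<(), eframe::Error> {
    let mut clicks = 0u32;
    eframe::run_simple_native("demo", eframe::NativeOptions::default(), move |ctx, _frame| {
        egui::CentralPanel::default().show(ctx, |ui| {
            ui.heading("Hello from egui");
            if ui.button("Click me").clicked() {
                clicks += 1;
            }
            ui.label(format!("Clicked {clicks} times"));
        });
    })
}
```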
Some notes for followers: The bot has been switched to run on the #Wasm platform and is still written in Rust. It's licensed under AGPL v3 (code link in profile). Images now come with alt text generated via the Google Gemini API. Since the Google API costs money, I might switch to something cheaper or even run my own LLM. @seungjin