Haha. Great (nerdy) meme.
Bandicoot: A Templated C++ Library for GPU Linear Algebra
Bilinear forms implies the existence of alinear forms, heterolinear forms, homolinear forms, panlinear forms, and polylinear forms. All of which are generalized by queerlinear forms.
#LinearAlgebra #BilinearForm #QueerlinearForm
Logistic regression may be used for classification.
In order to preserve the convexity of the loss function, a log-loss cost function is used for logistic regression. This cost function reaches its extremes at the labels True and False.
The gradient of the logistic regression loss turns out to have the same form as the gradient of the least-squares error.
More: https://www.baeldung.com/cs/gradient-descent-logistic-regression
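For reference, a standard way to write the log-loss and its gradient, using the sigmoid hypothesis \(h_\theta(x) = \sigma(\theta^\top x)\) (notation mine, not taken from the linked article):

\[
J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\Big[y^{(i)}\log h_\theta(x^{(i)}) + \big(1-y^{(i)}\big)\log\big(1-h_\theta(x^{(i)})\big)\Big],
\qquad
\frac{\partial J}{\partial \theta_j} = \frac{1}{m}\sum_{i=1}^{m}\big(h_\theta(x^{(i)}) - y^{(i)}\big)\,x_j^{(i)},
\]

which has the same form as the least-squares gradient, with \(h_\theta(x^{(i)})\) in place of the linear prediction.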
`His initial intended uses were for linguistic analysis and other mathematical subjects like card shuffling, but both Markov chains and matrices rapidly found use in other fields.`
These are clever people: Quantum Scientists Building New Math of Cryptography
https://www.quantamagazine.org/quantum-scientists-have-built-a-new-math-of-cryptography-20250725/
One-way function
https://en.wikipedia.org/wiki/One-way_function
Quantum cryptography
https://en.wikipedia.org/wiki/Quantum_cryptography
Permanent (mathematics)
https://en.wikipedia.org/wiki/Permanent_(mathematics)
♯P-completeness of 01-permanent
https://en.wikipedia.org/wiki/%E2%99%AFP-completeness_of_01-permanent
“I Don’t Like NumPy”, ‘Dynomight’ (https://dynomight.net/numpy/).
... that's nothing new. The point was to address a related question: suppose that the eigensystem {v_i, λ_i}, i = 1, ..., n of a full-rank, well-conditioned n-by-n square matrix A is known, and you are then given a related matrix B = A + E, where E represents some type of random noise. Can a relationship between E and c be derived such that the eigensystem of A also satisfies f( B v_i - λ_i v_i ) <= c for all i and some f?
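One immediate observation (my addition, not part of the original question), assuming f is the Euclidean norm and the v_i are unit vectors: since (A + E) v_i - λ_i v_i = E v_i, we have

\[ f(B v_i - \lambda_i v_i) = \|E v_i\|_2 \le \|E\|_2 \quad \text{for all } i, \]

so c = \(\|E\|_2\) works for that particular choice of f.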
In this groundbreaking revelation, the author stretches the very fabric of reality by turning boring old functions into thrilling "infinite-dimensional vectors". Because who doesn't want to apply linear algebra to every mundane aspect of life?
Required reading: everything you've ever learned about math, ever.
https://thenumb.at/Functions-are-Vectors/ #linearalgebra #mathrevolution #infinitedimensionalvectors #thrillingmath #hackersnews #HackerNews #ngated
This is called "A Gentle Introduction to the Hessian Matrix"
Hessians are somewhere between #linearalgebra #calculus and #rstats but still a core aspect of #datascience
All in all, building and deriving things like these is probably only useful when developing a unique solution. For the vast majority of cases, having a general understanding is enough.
... actually, I am pretty sure that there is a #python library for just such an occasion (I have never looked though so ymmv)
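In case it helps, here is a minimal sketch of two such options, assuming SymPy and NumPy are acceptable (my example, not from the linked article): SymPy's hessian builds the matrix symbolically, and a few lines of central differences give a numerical version.

```python
import numpy as np
import sympy as sp

# Symbolic Hessian with SymPy: f(x, y) = x**2 * y + sin(y)
x, y = sp.symbols("x y")
f = x**2 * y + sp.sin(y)
H_sym = sp.hessian(f, (x, y))   # 2x2 matrix of second partial derivatives
print(H_sym)                    # Matrix([[2*y, 2*x], [2*x, -sin(y)]])

# Numerical Hessian via central finite differences
def hessian_fd(func, p, h=1e-5):
    p = np.asarray(p, dtype=float)
    n = p.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i = np.zeros(n); e_i[i] = h
            e_j = np.zeros(n); e_j[j] = h
            # central-difference approximation of the mixed second partial
            H[i, j] = (func(p + e_i + e_j) - func(p + e_i - e_j)
                       - func(p - e_i + e_j) + func(p - e_i - e_j)) / (4 * h * h)
    return H

g = lambda v: v[0]**2 * v[1] + np.sin(v[1])
print(hessian_fd(g, [1.0, 2.0]))  # close to [[4, 2], [2, -sin(2)]]
```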
Okay. After that bit of hilarity yesterday, have some stuff on #linearalgebra
Not a formula sheet but still useful for developing your #datascience intuition
Here's a question: let \(M\) be a \(0\times 0\) matrix with entries in the field \(\mathbb{F}\). What is \(\det(M)\)?
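(One conventional way to see the answer, my note rather than part of the question: in the Leibniz formula

\[ \det(M) = \sum_{\sigma \in S_0} \operatorname{sgn}(\sigma) \prod_{i=1}^{0} M_{i,\sigma(i)}, \]

the symmetric group \(S_0\) contains exactly one permutation, the empty map, and the empty product equals 1, so \(\det(M) = 1\). That is also the value needed for \(\det(A \oplus B) = \det(A)\det(B)\) to keep holding when one block is empty.)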
That first implementation didn't even support the multi-GPU and multi-node features of #GPUSPH (it could only run on a single GPU), but it paved the way for the full version, which took advantage of the whole infrastructure of GPUSPH in multiple ways.
First of all, we didn't have to worry about how to encode the matrix and its sparseness, because we could compute the coefficients on the fly, and operate with the same neighbor-list traversal logic that was used in the rest of the code; this allowed us to minimize memory use and increase code reuse.
Secondly, we gained control over the accuracy of intermediate operations, allowing us to use compensated sums wherever needed.
Thirdly, we could leverage the multi-GPU and multi-node capabilities already present in GPUSPH to distribute computations across all available devices.
And last but not least, we actually found ways to improve the classic #CG and #BiCGSTAB linear solving algorithms to achieve excellent accuracy and convergence even without preconditioners, while making the algorithms themselves more parallel-friendly:
https://doi.org/10.1016/j.jcp.2022.111413
4/n
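To make the matrix-free idea (first point above) and the compensated sums (second point) concrete, here is a minimal CPU-side Python sketch: conjugate gradients driven by an operator callback instead of a stored sparse matrix, with a Kahan-compensated dot product. This is only an illustration of the general technique, not the GPUSPH/CUDA implementation or the improved variants from the linked paper.

```python
import numpy as np

def kahan_dot(a, b):
    """Dot product with Kahan compensation to reduce round-off in the accumulation."""
    s = 0.0
    c = 0.0  # running compensation for lost low-order bits
    for x, y in zip(a, b):
        term = x * y - c
        t = s + term
        c = (t - s) - term
        s = t
    return s

def cg_matrix_free(apply_A, b, tol=1e-10, max_iter=1000):
    """Conjugate gradients where the matrix is only available as the action x -> A @ x."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rs_old = kahan_dot(r, r)
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs_old / kahan_dot(p, Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = kahan_dot(r, r)
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Example: coefficients "computed on the fly" for a 1D Laplacian, never stored as a matrix.
n = 100
def apply_laplacian(v):
    out = 2.0 * v
    out[:-1] -= v[1:]
    out[1:] -= v[:-1]
    return out

b = np.ones(n)
x = cg_matrix_free(apply_laplacian, b)
print(np.linalg.norm(apply_laplacian(x) - b))  # true residual, near the 1e-10 tolerance
```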
#datascience cheatsheets for #python #probability #linearalgebra #calculus and #scipy
(Not necessarily in that order)
My latest article delves into vector rotations as a specialized class of linear transformations, addressing their theoretical underpinnings in 2D and 3D. We examine classical rotation matrices, Rodrigues' formula, and their critical role in #GameWorldModeling and real-time systems, particularly concerning computational precision.
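As a quick companion sketch (mine, not taken from the article), Rodrigues' formula for rotating a vector v about a unit axis k by an angle θ takes only a few lines of NumPy:

```python
import numpy as np

def rodrigues_rotate(v, k, theta):
    """Rotate vector v about unit axis k by angle theta (radians) using Rodrigues' formula:
    v_rot = v*cos(theta) + (k x v)*sin(theta) + k*(k . v)*(1 - cos(theta))."""
    k = k / np.linalg.norm(k)  # make sure the axis is a unit vector
    return (v * np.cos(theta)
            + np.cross(k, v) * np.sin(theta)
            + k * np.dot(k, v) * (1.0 - np.cos(theta)))

# Rotating the x-axis by 90 degrees about the z-axis should give the y-axis.
print(rodrigues_rotate(np.array([1.0, 0.0, 0.0]),
                       np.array([0.0, 0.0, 1.0]),
                       np.pi / 2))   # approximately [0, 1, 0]
```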
The dot product is essential in game dev and simulations—from FOV checks to coordinate transformations. This intro article covers the geometric intuition, key lemmas, and a practical example for visibility modeling without ray tracing. https://thorsten.suckow-homberg.de/docs/articles/computer-graphics/the-geometry-of-the-dot-product
#GameDev #Math #ComputerGraphics #LinearAlgebra
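As a small illustration of the FOV-check use case mentioned above (my own example, not taken from the linked article): a target is visible when the angle between the viewing direction and the direction to the target is at most half the field of view, which a single dot product against a precomputed cosine decides:

```python
import numpy as np

def in_field_of_view(viewer_pos, view_dir, target_pos, fov_degrees):
    """True if target_pos lies within the cone of half-angle fov_degrees/2 around view_dir."""
    to_target = target_pos - viewer_pos
    dist = np.linalg.norm(to_target)
    if dist == 0.0:
        return True  # target sits exactly on the viewer
    cos_angle = np.dot(view_dir / np.linalg.norm(view_dir), to_target / dist)
    return cos_angle >= np.cos(np.radians(fov_degrees / 2.0))

viewer = np.array([0.0, 0.0])
forward = np.array([1.0, 0.0])
print(in_field_of_view(viewer, forward, np.array([5.0, 1.0]), 90.0))   # True (about 11 degrees off-axis)
print(in_field_of_view(viewer, forward, np.array([-5.0, 1.0]), 90.0))  # False (behind the viewer)
```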
`Although the term "matrix" was introduced into mathematical literature by James Joseph Sylvester in 1850, the credit for founding the theory of matrices must be given to Arthur Cayley, since he published the first expository articles on the subject. ... Cayley's introductory paper in matrix theory was written in French and published in a German periodical [in 1855]`
`Cardan, in Ars Magna (1545), gives a rule for solving a system of two linear equations which he calls regula de modo and which [7] calls mother of rules ! This rule gives what essentially is Cramer's rule for solving a 2 × 2 system although Cardan does not make the final step. Cardan therefore does not reach the definition of a determinant but, with the advantage of hindsight, we can see that his method does lead to the definition.`
https://mathshistory.st-andrews.ac.uk/HistTopics/Matrices_and_determinants/
Thanks to the Manchester NA group for organizing a seminar by David Watkins, one of the foremost experts on matrix eigenvalue algorithms. I often find numerical linear algebra talks too technical, but I could follow David's talk quite well even though I did not get everything, so thanks for that.
David spoke about the standard eigenvalue algorithm, which is normally called the QR algorithm. He dislikes that name, because the QR decomposition is not actually important in practice, and prefers to call it the Francis algorithm (after John Francis, who developed it). It is better to think of the algorithm as an iterative process which reduces the matrix to triangular form in the limit.
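To make the "iterative process" point concrete, here is a minimal sketch of the basic, unshifted QR iteration in NumPy. It is not Francis's implicitly shifted algorithm, just enough to watch the iterates approach triangular (here diagonal) form with the eigenvalues appearing on the diagonal; without shifts the convergence can be slow:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
A = A + A.T          # symmetric, so the eigenvalues are real and convergence is clean

Ak = A.copy()
for _ in range(500):
    Q, R = np.linalg.qr(Ak)   # each step: factor, then multiply back in reverse order
    Ak = R @ Q                # Ak = Q.T @ A_prev @ Q stays similar to A, so eigenvalues are preserved

# The diagonal of the (near-)triangular limit approximates the eigenvalues of A.
print(np.sort(np.diag(Ak)))
print(np.sort(np.linalg.eigvalsh(A)))  # should roughly match
```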