fosstodon.org is one of the many independent Mastodon servers you can use to participate in the fediverse.

#linearalgebra

Replied in thread

Logistic regression may be used for classification.

To keep the loss function convex, logistic regression uses a log-loss cost function, whose extremes occur at the labels True and False.

The gradient of the logistic-regression loss turns out to have the same form as the gradient of the least-squared error.
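A minimal NumPy sketch of that correspondence (illustrative, not code from the linked article): both gradients are Xᵀ(prediction − y), differing only in the prediction function.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_gradient(X, y, w):
    # gradient of the log-loss: X^T (sigmoid(Xw) - y)
    return X.T @ (sigmoid(X @ w) - y)

def least_squares_gradient(X, y, w):
    # gradient of the squared error: X^T (Xw - y)
    return X.T @ (X @ w - y)
```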

More: baeldung.com/cs/gradient-desce

Continued thread

... that's nothing new. The point was to address a related question: suppose the eigensystem {v_i, λ_i}, i = 1, ..., n, of a full-rank, well-conditioned n-by-n matrix A is known, and you are then given a related matrix B = A + E, where E represents some type of random noise. Can a relationship between E and c be derived, such that the eigensystem of A also satisfies f( B v_i - λ_i v_i ) <= c, for all i and some f?
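One concrete answer for the simplest choice of f (my own observation, not from the thread): since B v_i − λ_i v_i = (A + E) v_i − λ_i v_i = E v_i, taking f to be the 2-norm gives c = ‖E‖₂ for unit eigenvectors. A quick NumPy check:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
lams, V = np.linalg.eig(A)  # columns of V are unit eigenvectors

E = 1e-3 * rng.standard_normal((n, n))  # random perturbation
B = A + E

# B v_i - lam_i v_i = E v_i, hence ||B v_i - lam_i v_i||_2 <= ||E||_2
residuals = np.linalg.norm(B @ V - V * lams, axis=0)
c = np.linalg.norm(E, 2)  # spectral norm of the perturbation
```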

In this groundbreaking revelation, the author stretches the very fabric of reality by turning boring old functions into thrilling "infinite-dimensional vectors". 🧐 Because who doesn't want to apply linear algebra to every mundane aspect of life? 🤓🎉 Required reading: everything you've ever learned about math, ever.
thenumb.at/Functions-are-Vecto #linearalgebra #mathrevolution #infinitedimensionalvectors #thrillingmath #hackersnews #HackerNews #ngated

thenumb.at: Functions are Vectors

This is called "A Gentle Introduction to the Hessian Matrix"

Hessians sit somewhere between #linearalgebra, #calculus, and #rstats, but are still a core aspect of #datascience

All in all, building and deriving things like these is probably only useful when developing a unique solution. For the vast majority of cases, a general understanding is enough.

... actually, I am pretty sure that there is a #python library for just such an occasion (I have never looked though so ymmv)
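I don't know which library was meant either, but a Hessian is easy to sketch with central finite differences (an illustrative sketch under that assumption, not any specific library's API):

```python
import numpy as np

def numerical_hessian(f, x, h=1e-5):
    """Central finite-difference Hessian of a scalar function f at point x."""
    n = x.size
    H = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            # four-point stencil for the mixed second partial d2f/dxi dxj
            xpp = x.copy(); xpp[i] += h; xpp[j] += h
            xpm = x.copy(); xpm[i] += h; xpm[j] -= h
            xmp = x.copy(); xmp[i] -= h; xmp[j] += h
            xmm = x.copy(); xmm[i] -= h; xmm[j] -= h
            H[i, j] = (f(xpp) - f(xpm) - f(xmp) + f(xmm)) / (4 * h * h)
    return H

# example: f(x, y) = x^2 y has Hessian [[2y, 2x], [2x, 0]]
f = lambda v: v[0] ** 2 * v[1]
H = numerical_hessian(f, np.array([1.0, 2.0]))
```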

Continued thread

That first implementation didn't even support the multi-GPU and multi-node features of #GPUSPH (it could only run on a single GPU), but it paved the way for the full version, which took advantage of the whole GPUSPH infrastructure in multiple ways.

First of all, we didn't have to worry about how to encode the matrix and its sparseness, because we could compute the coefficients on the fly and operate with the same neighbors-list traversal logic used in the rest of the code; this allowed us to minimize memory use and increase code reuse.

Secondly, we gained control over the accuracy of intermediate operations, allowing us to use compensated sums wherever needed.

Thirdly, we could leverage the multi-GPU and multi-node capabilities already present in GPUSPH to distribute computations across all available devices.

And last but not least, we actually found ways to improve the classic #CG and #BiCGSTAB linear solving algorithms to achieve excellent accuracy and convergence even without preconditioners, while making the algorithms themselves more parallel-friendly:
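For reference, the classic compensated (Kahan) sum looks like this (a generic sketch, not GPUSPH's actual implementation):

```python
def kahan_sum(values):
    """Compensated (Kahan) summation: tracks lost low-order bits."""
    total = 0.0
    c = 0.0  # running compensation for rounding error
    for v in values:
        y = v - c           # re-inject the error from the previous step
        t = total + y
        c = (t - total) - y  # low-order bits lost in total + y
        total = t
    return total
```

The compensation term keeps the accumulated rounding error bounded by a few ulps regardless of how many terms are summed.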

doi.org/10.1016/j.jcp.2022.111
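For context, unpreconditioned #CG in its textbook form fits in a few lines (a generic sketch, not the improved variant from the paper):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Textbook unpreconditioned CG for a symmetric positive-definite A."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```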

4/n

My latest article delves into vector rotations as a specialized class of linear transformations, addressing their theoretical underpinnings in 2D and 3D. We examine classical rotation matrices, Rodrigues' formula, and their critical role in #GameWorldModeling and real-time systems, particularly concerning computational precision.

thorsten.suckow-homberg.de/doc
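As a quick illustration of the Rodrigues formula discussed in the article (my own sketch, not code from it): v_rot = v cos θ + (k × v) sin θ + k (k · v)(1 − cos θ) for a unit axis k.

```python
import numpy as np

def rodrigues_rotate(v, k, theta):
    """Rotate vector v about axis k by angle theta via Rodrigues' formula."""
    k = k / np.linalg.norm(k)  # ensure a unit axis
    return (v * np.cos(theta)
            + np.cross(k, v) * np.sin(theta)
            + k * (k @ v) * (1 - np.cos(theta)))
```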

`Although the term "matrix" was introduced into mathematical literature by James Joseph Sylvester in 1850, the credit for founding the theory of matrices must be given to Arthur Cayley, since he published the first expository articles on the subject. ... Cayley's introductory paper in matrix theory was written in French and published in a German periodical [in 1855]`

mathshistory.st-andrews.ac.uk/

Maths History: Arthur Cayley - Biography. Arthur Cayley's most important work was in developing the algebra of matrices and work in non-euclidean and n-dimensional geometry.

`Cardan, in Ars Magna (1545), gives a rule for solving a system of two linear equations which he calls regula de modo and which [7] calls mother of rules ! This rule gives what essentially is Cramer's rule for solving a 2 × 2 system although Cardan does not make the final step. Cardan therefore does not reach the definition of a determinant but, with the advantage of hindsight, we can see that his method does lead to the definition.`
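The rule Cardan almost reached is, in modern terms, Cramer's rule for a 2 × 2 system (a sketch for illustration):

```python
def cramer_2x2(a, b, c, d, e, f):
    """Solve a x + b y = e, c x + d y = f by Cramer's rule."""
    det = a * d - b * c  # determinant of the coefficient matrix
    if det == 0:
        raise ValueError("singular system")
    return ((e * d - b * f) / det, (a * f - e * c) / det)
```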

mathshistory.st-andrews.ac.uk/

Maths History: Matrices and determinants

Thanks to the Manchester NA group for organizing a seminar by David Watkins, one of the foremost experts on matrix eigenvalue algorithms. I find numerical linear algebra talks often too technical, but I could follow David's talk quite well even though I did not get everything, so thanks for that.

David spoke about the standard eigenvalue algorithm, normally called the QR algorithm. He dislikes that name, because the QR decomposition is not actually important in practice, so he calls it the Francis algorithm (after John Francis, who developed it). It is better to think of the algorithm as an iterative process that reduces the matrix to triangular form in the limit.
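That limiting behavior is easy to see with the basic unshifted iteration (a toy sketch; Francis's practical algorithm uses Hessenberg form and implicit shifts):

```python
import numpy as np

# Unshifted QR iteration: factor A_k = Q_k R_k, then set A_{k+1} = R_k Q_k.
# Each step is a similarity transform (R Q = Q^T A Q), so eigenvalues are
# preserved while the iterates drift toward upper-triangular form.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])  # eigenvalues 5 and 2
for _ in range(50):
    Q, R = np.linalg.qr(A)
    A = R @ Q
```

After enough iterations the subdiagonal entry vanishes and the eigenvalues appear on the diagonal.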