
Linear Algebra and Matrix Analysis (LAMA)

The research group on Linear Algebra and Matrix Analysis is part of the group "Mathematics Applied to Control, Systems and Signals", a research group officially recognized by Universidad Carlos III de Madrid. The group on Linear Algebra and Matrix Analysis does research on theoretical, numerical and applied aspects of these disciplines, with special emphasis on the core theory and perturbation theory of matrix equations, matrix pencils, nonlinear eigenvalue problems, and inverse eigenvalue problems, as well as on their numerical solution. The research of this group has been continuously funded since 2000 by different grants of the "Planes Nacionales de Investigación" of the Government of Spain. Currently, it is funded by the grant "New problems on matrices depending on parameters and problems on constant matrices" (Agencia Estatal de Investigación, June 1, 2020 to May 31, 2024), whose principal investigators are Fernando De Terán and Froilán M. Dopico.


TFM proposal: The Higher Order SVD (HOSVD) and its applications

Tensors are a generalization of matrices: matrices are two-dimensional arrays, whereas tensors are n-dimensional arrays (see, for instance, the picture of the fibers of a three-dimensional tensor in [KB09, p. 458]). Tensors have been used in many different settings in science and engineering, including computer vision, chemistry, data mining, imaging, signal processing, telecommunications, and text mining (see [KB09]).

 

The Higher Order SVD (HOSVD) is a decomposition of tensors that extends the Singular Value Decomposition (SVD) of matrices. It has applications in several of the areas mentioned above and, in particular, in face recognition and handwritten digit classification (see [E19] and [ES07]).


In this project, the student is expected to become familiar with the HOSVD and other tensor decompositions, and to use them in a particular applied setting.
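As a first taste of what is involved, the HOSVD can be sketched in a few lines of NumPy (a minimal illustration, not part of the proposal itself; the function names are ours): each factor U_k collects the left singular vectors of the mode-k unfolding, and the core tensor is obtained by mode products with the conjugate-transposed factors.

```python
import numpy as np

def unfold(T, mode):
    """Mode-k unfolding: the mode-k fibers of T become the columns."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, M, mode):
    """Mode-k product T x_k M: multiply every mode-k fiber of T by M."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd(T):
    """HOSVD: T = S x_1 U[0] x_2 U[1] x_3 ... with orthonormal factors U[k]."""
    U = [np.linalg.svd(unfold(T, k), full_matrices=False)[0] for k in range(T.ndim)]
    S = T
    for k, Uk in enumerate(U):
        S = mode_product(S, Uk.conj().T, k)
    return S, U
```

Multiplying the core back by the factors recovers the original tensor up to rounding errors, which is a quick sanity check for a first implementation.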

References:


[E19] L. Eldén. Matrix Methods in Data Mining and Pattern Recognition, 2nd ed. SIAM, Philadelphia, 2019.
[ES07] B. Savas and L. Eldén. Handwritten digit classification using higher order singular value decomposition. Pattern Recognition, 40 (2007), no. 3, 993-1003.
[KB09] T. G. Kolda and B. W. Bader. Tensor decompositions and applications. SIAM Review, 51 (2009), no. 3, 455-500.

Contact: Fernando De Terán (fteran@math.uc3m.es)

 

 

TFM proposal: Computing the dimension of orbits of Hermitian matrix pencils

 

Matrix pencils are pairs of matrices (A,B) of the same size. Equivalently, they are usually seen as matrix polynomials of degree 1, namely, objects of the form A+xB, where A and B are matrices of the same size.

 

In particular, Hermitian matrix pencils are a class of matrix pencils that frequently arises in applications in Control Theory and Mechanical Systems (see, for instance, [MMMM]). They are matrix pencils A+xB where both A and B are Hermitian matrices (namely, A*=A and B*=B, where * denotes the conjugate transpose).

 

The orbit of a Hermitian matrix pencil A+xB is the set of (Hermitian) matrix pencils which are “*-congruent” to A+xB, namely, the set

 

O(A+xB) = {P*(A+xB)P : P invertible}.

 

This set is a differentiable manifold, and obtaining the dimension of this manifold allows one to get an idea of "how big" the set is. An open problem in this context is to describe the "dominance" ordering between orbit closures, namely, to decide whether one orbit is included in the closure of another orbit or not. This problem has been addressed (and solved) for general pencils and for pencils with other structures (like skew-symmetric ones, see [DK]), and obtaining the dimension of the orbits of Hermitian pencils would shed light on this problem for these particular structured pencils (so this could be a first step of a PhD project aiming to solve it).

 

In the recent work [DDD] we obtained the dimension of the "generic" orbits (namely, the largest orbits) of Hermitian pencils, so obtaining the dimension of the other orbits is a natural continuation of this work.

 

Although this may appear to be a problem from differential geometry (since orbits are differentiable manifolds), the dimension of the orbits can be obtained using linear algebra, namely, by computing the dimension of the solution set of a linear matrix equation.
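To illustrate the linear-algebra approach numerically (a sketch under our own setup, not the method of [DDD]): the tangent space to the orbit at A+xB is the image of the real-linear map X -> (X*A + AX, X*B + BX), so the real dimension of the orbit equals the rank of this map, which can be computed by applying it to a real basis of the n x n complex matrices.

```python
import numpy as np

def orbit_dimension(A, B, tol=1e-9):
    """Real dimension of the *-congruence orbit of the Hermitian pencil A + xB.

    Applies the real-linear map X -> (X^*A + AX, X^*B + BX) to the real basis
    E_{jk}, i*E_{jk} of C^{n x n} (a real vector space of dimension 2n^2) and
    returns the rank of the resulting real matrix.
    """
    n = A.shape[0]
    cols = []
    for j in range(n):
        for k in range(n):
            for unit in (1.0, 1.0j):
                E = np.zeros((n, n), dtype=complex)
                E[j, k] = unit
                T1 = E.conj().T @ A + A @ E   # tangent direction in the A part
                T2 = E.conj().T @ B + B @ E   # tangent direction in the B part
                v = np.concatenate([T1.ravel(), T2.ravel()])
                cols.append(np.concatenate([v.real, v.imag]))
    return np.linalg.matrix_rank(np.array(cols).T, tol)
```

For instance, for the 1 x 1 pencil (2) + x(3) the orbit is the curve {|p|^2 (2 + 3x) : p nonzero}, so the dimension is 1; for A = I_2 and B = diag(1, 2) a short hand computation gives dimension 6, and the function agrees.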

 

References:


[DDD] F. De Terán, A. Dmytryshyn, and F. M. Dopico. Generic eigenstructures of Hermitian pencils. Submitted.

[DK] A. Dmytryshyn and B. Kågström. Orbit closure hierarchies of skew-symmetric matrix pencils. SIAM Journal on Matrix Analysis and Applications, 35 (2014) 1429-1443.

[MMMM] D. S. Mackey, N. Mackey, C. Mehl, and V. Mehrmann. Structured polynomial eigenvalue problems: good vibrations from good linearizations. SIAM Journal on Matrix Analysis and Applications, 28 (2006) 1029-1051.

 

Contact: Fernando De Terán (fteran@math.uc3m.es)

 

 

TFG proposal: The mathematics of Deep Learning

 

Deep Learning is a powerful branch of Machine Learning that attracts the interest of scientists, engineers, and entrepreneurs. It relies on mathematical tools and techniques, some of which are well-known and come from areas such as Linear Algebra and Optimization. There is a large body of scientific literature on the mathematics of Deep Learning: typing "deep learning" in the standard database of mathematical literature (MathSciNet) produces 11,234 results (namely, the number of works containing "deep learning" anywhere in the text), whereas restricting the search to titles produces 1,551 results.

 

The goal of this TFG project is to understand some of the basic mathematical ideas that support Deep Learning. The project will require not only learning and understanding the mathematical background, but also becoming familiar with the tools and techniques and applying them to particular settings, like creating and testing neural networks. It will also require writing appropriate code to run the corresponding algorithms.
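As a taste of the kind of code involved, the following minimal sketch in plain NumPy (in the spirit of [HH19]; the architecture, data, and hyperparameters are just an illustration) trains a one-hidden-layer network on the XOR points by gradient descent with backpropagation:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# XOR data: four points in the plane with binary labels
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float).T  # inputs, shape (2, 4)
y = np.array([[0, 1, 1, 0]], float)                      # targets, shape (1, 4)

# one hidden layer with 8 sigmoid units, then a sigmoid output unit
W1 = rng.standard_normal((8, 2)); b1 = np.zeros((8, 1))
W2 = rng.standard_normal((1, 8)); b2 = np.zeros((1, 1))

def loss():
    A2 = sigmoid(W2 @ sigmoid(W1 @ X + b1) + b2)
    return 0.5 * ((A2 - y) ** 2).sum()

loss_before = loss()
lr = 1.0
for _ in range(20000):
    # forward pass
    A1 = sigmoid(W1 @ X + b1)
    A2 = sigmoid(W2 @ A1 + b2)
    # backward pass: gradients of the squared loss via the chain rule
    d2 = (A2 - y) * A2 * (1 - A2)
    d1 = (W2.T @ d2) * A1 * (1 - A1)
    # gradient-descent updates
    W2 -= lr * d2 @ A1.T; b2 -= lr * d2.sum(axis=1, keepdims=True)
    W1 -= lr * d1 @ X.T;  b1 -= lr * d1.sum(axis=1, keepdims=True)

loss_after = loss()
```

Every ingredient here (the chain rule, the matrix-vector products, the descent step) is exactly the mathematical background the project is about.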

 

Some of the basic references for this project are [GBC16] and [HH19].

References:

[GBC16] I. Goodfellow, Y. Bengio, A. Courville. Deep Learning.  MIT Press, Cambridge, 2016.

[HH19] C. F. Higham, D. J. Higham. Deep Learning: An introduction for applied mathematicians. SIAM Rev. 61 no. 4 (2019) 860-891.

            

Contact: Fernando De Terán (fteran@math.uc3m.es)


TFG proposal: Diagonal scalings of nonnegative matrices with relaxed targets for the row and column sums

 

The problem of scaling a matrix with nonnegative entries, by multiplying it on the left and on the right by nonnegative diagonal matrices so as to obtain a matrix with prescribed row and column sums, is a classical problem in matrix analysis [4] with multiple modern applications, for instance in optimal transport, data science, and accurate algorithms for computing eigenvalues of matrix pencils [1,2,3]. The goal of this TFG is to code and compare some of the best-known algorithms for solving this problem numerically, and to extend them to the scenario where the prescribed row and column sums do not have fixed values.
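The classical algorithm for the fixed-target case is the Sinkhorn-Knopp iteration, which alternately rescales the rows and the columns; a minimal sketch (the function name and stopping rule are ours):

```python
import numpy as np

def sinkhorn(A, r, c, max_iter=10000, tol=1e-12):
    """Find positive vectors d1, d2 so that diag(d1) A diag(d2) has row sums r
    and column sums c. Assumes A entrywise positive and sum(r) == sum(c).
    Classical Sinkhorn-Knopp alternating iteration."""
    d1 = np.ones(A.shape[0])
    d2 = np.ones(A.shape[1])
    for _ in range(max_iter):
        d1 = r / (A @ d2)        # fix the row sums
        d2 = c / (A.T @ d1)      # fix the column sums
        S = d1[:, None] * A * d2[None, :]
        if np.abs(S.sum(axis=1) - r).max() < tol:
            break
    return d1, d2
```

After convergence, `d1[:, None] * A * d2` is the scaled matrix; the relaxed-target extension studied in this TFG would modify precisely these two update steps.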

 

[1] F.M. Dopico, M.C. Quintana, and P. Van Dooren, Diagonal scalings for the eigenstructure of arbitrary pencils, SIAM J. Matrix Anal. Appl., 43 (2022), pp. 1213-1237.  

 

[2] M. Idel, A Review of Matrix Scaling and Sinkhorn’s Normal Form for Matrices and Positive Maps, arXiv:1609.06349, 2016.

 

[3] G. Peyré and M. Cuturi, Computational optimal transport: with applications to data science, Found. Trends Mach. Learn., 11 (2019), pp. 355–607.

 

[4] U. G. Rothblum and H. Schneider, Scalings of matrices which have prespecified row sums and column sums via optimization, Linear Algebra Appl., 114/115 (1989), pp. 737–764.

 

Contact: Froilán M. Dopico (dopico@math.uc3m.es)

 

TFM proposal: Extending the Weyr canonical form of matrices to unstructured and structured matrix pencils

 

The Weyr canonical form of a matrix under similarity is equivalent to the Jordan canonical form, but it displays directly the Weyr characteristic of the eigenvalues of the matrix, instead of the Segre characteristic displayed by the Jordan canonical form (see [1, Section 3.4]). Though the Weyr form was published in 1885, it has received very little attention compared to the Jordan canonical form. However, it has recently been proved that it has considerable advantages over the Jordan form for solving problems related to commuting matrices [2]. The goal of this TFM is to investigate whether it is possible to extend the Weyr form to matrix pencils under strict equivalence, as an alternative to the well-known Kronecker canonical form, and also whether it is possible to develop structure-preserving versions of the Weyr form for structured pencils (symmetric, Hermitian, etc.) under congruence.
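The Weyr characteristic itself is easy to compute from rank differences, which makes a good warm-up exercise (a sketch; the function name is ours): w_k = rank((A - lam*I)^(k-1)) - rank((A - lam*I)^k).

```python
import numpy as np

def weyr_characteristic(A, lam, tol=1e-9):
    """Weyr characteristic (w_1, w_2, ...) of the eigenvalue lam of A,
    where w_k = rank((A - lam*I)^(k-1)) - rank((A - lam*I)^k)."""
    n = A.shape[0]
    N = A - lam * np.eye(n)
    w, P, prev_rank = [], np.eye(n), n
    while True:
        P = P @ N                              # next power of A - lam*I
        rank = np.linalg.matrix_rank(P, tol)
        if prev_rank == rank:                  # ranks have stabilized
            break
        w.append(prev_rank - rank)
        prev_rank = rank
    return w
```

For a nilpotent matrix with Jordan blocks of sizes 3 and 1 (Segre characteristic (3, 1)) this returns [2, 1, 1], the conjugate partition, which is exactly the relation between the two characteristics described in [1, Section 3.4].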

 

[1] R. Horn and C.R. Johnson, Matrix Analysis, 2nd Edition, Cambridge University Press, 2013.

 

[2] K.C. O’Meara, J. Clark, and C. I. Vinsonhaler,  Advanced Topics in Linear Algebra: Weaving Matrix Properties through the Weyr Form, Oxford University Press, 2011.

 

Contact: Froilán M. Dopico (dopico@math.uc3m.es)

TFG Proposal: Elliptic-curve cryptography

Public key algorithms are becoming fundamental in modern Cryptography. In a public key cryptosystem, anyone can encrypt a message using the intended receiver's public key, but that encrypted message can only be decrypted with the receiver's private key. This scheme has the advantage of not having to pre-share symmetric keys (a fundamentally difficult problem), while hybrid schemes retain the high data throughput of symmetric-key cryptography. Public key cryptography therefore avoids the main weakness of private-key cryptography: users never need to transmit or reveal their private keys to anyone.

There are several public key systems, based on different underlying mathematical ideas. One of the most distinctive is based on the theory of elliptic curves, traditionally a major area of research in number theory (elliptic curves were famously used, for example, in Andrew Wiles' proof of Fermat's Last Theorem). Elliptic curves are applicable to key agreement, digital signatures, pseudo-random generators and other computational tasks, like encryption, primality testing or integer factorization. The main reason why elliptic curves defined over finite fields can be used as a basis for public-key cryptosystems is that they provide an inexhaustible supply of finite abelian groups which, even when large, are amenable to computation because of their rich algebraic structure.
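The group law on an elliptic curve over a finite field is itself a nice first programming exercise; a minimal sketch of affine point addition on y^2 = x^3 + ax + b over F_p (with None playing the role of the point at infinity; the function names and the toy curve are ours):

```python
def ec_add(P, Q, a, p):
    """Add two points of y^2 = x^3 + a*x + b over F_p (None = point at infinity)."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                              # P + (-P) = infinity
    if P == Q:                                   # tangent line (doubling)
        s = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:                                        # chord through P and Q
        s = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (s * s - x1 - x2) % p
    return (x3, (s * (x1 - x3) - y1) % p)

def ec_mul(k, P, a, p):
    """Scalar multiple k*P by double-and-add."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P, a, p)
        P = ec_add(P, P, a, p)
        k >>= 1
    return R
```

On the toy curve y^2 = x^3 + 2x + 2 over F_17, the point P = (5, 1) generates a cyclic group of order 19, so 19*P is the point at infinity; cryptographic curves use the same formulas with primes of hundreds of bits.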

 

In this project we aim to explore these ideas and analyze their use in different computational applications.

Contact: Julio Moro (jmoro@math.uc3m.es)

 

TFG Proposal: Nonnegative Matrix Factorization

One of the crucial questions in data analysis is to identify the underlying structure of a data set, as well as to extract significant information from it. Several simple and efficient methods to reach these goals can be collected under the common label of linear dimensionality reduction (LDR) techniques, which are essentially equivalent to low-rank matrix approximation (LRMA). This latter type of approximation seeks to find either low-rank matrices, or combinations of them, which are close in some appropriate metric to the matrix associated with the original data. Examples of LDR techniques are principal component analysis (closely related to the singular value decomposition), independent component analysis, low-rank matrix completion, or sparse component analysis. The reason for the success of these methods is that they are applicable in a wide range of applications, such as recommender systems, model-order reduction and system identification, clustering, or image analysis.

Among LRMA techniques, nonnegative matrix factorization (NMF) requires all matrices in the low-rank approximation to be entrywise nonnegative. This makes it much easier to interpret them meaningfully, for example when they correspond to nonnegative physical quantities. Applications of NMF include extracting parts of faces (such as eyes, noses, and lips) in a set of facial images, identifying topics in a set of documents, learning hidden Markov models, separating audio sources from their mixture, detecting communities in large networks, analyzing medical images, or decomposing gene expression microarrays.
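The best-known NMF algorithm, the multiplicative updates of Lee and Seung, fits in a few lines of NumPy (a minimal sketch; the initialization and stopping criterion are deliberately simplistic):

```python
import numpy as np

def nmf(V, r, n_iter=2000, eps=1e-9):
    """Approximate V ~ W @ H with W (m x r) and H (r x n) entrywise
    nonnegative, using the Lee-Seung multiplicative updates, which do not
    increase the Frobenius error ||V - W H||."""
    rng = np.random.default_rng(1)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H with W fixed
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W with H fixed
    return W, H
```

Because the updates are multiplicative, the factors stay nonnegative automatically; on a nonnegative matrix of exact rank 2, `nmf(V, 2)` typically drives the residual close to zero.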

In this project we aim to explore these ideas and to analyze their use in different computational applications. 

Contact: Julio Moro (jmoro@math.uc3m.es)
