
In this lab, you will see some applications of the SVD, including:

- The right singular vectors corresponding to vanishing singular values of A span the nullspace of A.
- The right singular vectors corresponding to positive singular values of A span the domain of A.
- The left singular vectors corresponding to positive singular values of A span the range of A.
- The rank of A is the number of positive singular values of A.

This fact can be used as the basis of a compression algorithm for images. Numerical methods for finding the singular value decomposition will also be addressed in this lab. Matlab's svd function uses the LAPACK subroutine dgesvd, so if you were to need it in a Fortran or C program, it would be available by linking against the LAPACK library. You may be interested in a six-minute video made by Cleve Moler at what was then known as the Los Alamos Scientific Laboratory. Most of the film is computer-animated graphics.

You may find it convenient to print the pdf version of this lab rather than the web page itself.

## The singular value decomposition

The singular value decomposition of an m-by-n matrix A is the factorization

A = U * S * V'     (1)

where U is an m-by-m orthogonal matrix, V is an n-by-n orthogonal matrix, and the matrix S in (1) is m-by-n and is of the form of a diagonal matrix: its only nonzero entries are the singular values s1 >= s2 >= ... >= 0 on the main diagonal.


Supposing that r is the largest integer so that s_r > 0, then the SVD implies that the rank of the matrix A is r. Similarly, if the matrix A is n-by-n, then the dimension of the nullspace of A is n - r.

The first r columns of U are an orthonormal basis of the range of A, and the last n - r columns of V are an orthonormal basis for the nullspace of A. Of course, in the presence of roundoff some judgement must be made as to what constitutes a zero singular value, but the SVD is the best (if expensive) way to discover the rank and nullity of a matrix. Assuming that the matrix A is non-singular, all singular values are strictly positive, and the SVD can be used to solve the system Ax = b as x = V*(S\(U'*b)).
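As a sketch of how these facts look in Matlab (the matrix and tolerance here are illustrative choices, not part of the lab):

```matlab
% Build a 4x4 matrix of rank 3 by making one column a combination of others.
A = rand(4,3);
A = [A, A(:,1) + A(:,2)];        % fourth column is dependent

[U, S, V] = svd(A);
tol = 1e-12;                     % judgement call for "zero" in roundoff
r = sum(diag(S) > tol);          % numerical rank; should be 3

rangeBasis = U(:, 1:r);          % orthonormal basis of the range of A
nullBasis  = V(:, r+1:end);      % orthonormal basis of the nullspace of A
norm(A * nullBasis)              % should be near zero
```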

Exercise 1 : In this exercise you will use the Matlab svd function to solve for the best-fit linear function of several variables through a set of points. This fit can be written as a rectangular system of equations, and the system (3) can be written in matrix form.

## Related Topics

Recall that the coefficients are the unknowns. For a moment, denote the vector of unknowns by x, the coefficient matrix by A, and the right-hand side by b. With this notation, (3) can be written as the matrix equation Ax = b, where A is usually a rectangular matrix. The SVD yields the least-squares solution. To see why this is so, suppose that the rank of A is r, so that there are precisely r positive singular values of A. Then Ax = U*S*V'*x = b, and the solution is x = V*Spinv*U'*b, where Spinv is the pseudoinverse of S, obtained by inverting the positive singular values and leaving the zero ones alone.

As we have done several times before, we will first generate a data set with known solution and then use svd to recover the known solution.
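A minimal sketch of that workflow (the coefficients and sample points below are made up for illustration, not the lab's data):

```matlab
% Known coefficients we hope to recover: z = c(1)*x + c(2)*y + c(3)
c_true = [2; -1; 0.5];

% Generate more data points than unknowns: a rectangular system.
x = rand(10,1);  y = rand(10,1);
A = [x, y, ones(10,1)];
b = A * c_true;                  % right-hand side with known solution

% Solve the least-squares problem via the SVD pseudoinverse.
[U, S, V] = svd(A, 0);           % economy-size SVD
c = V * (diag(1 ./ diag(S)) * (U' * b));

norm(c - c_true)                 % should be near roundoff level
```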


Place Matlab code for the following steps into a script m-file called exer1. Sometimes the data you are given turns out to be deficient because the variables supposed to be independent are actually related.

This dependency will cause the coefficient matrix to be singular or nearly singular. When the matrix is singular, the system of equations is actually redundant, and one equation can be eliminated.

Dealing with dependencies of this sort is one of the greatest difficulties in using least-squares fitting methods because it is hard to know which of the equations is the best one to eliminate. The singular value decomposition is the best way to deal with dependencies. In the following exercise you will construct a deficient set of data and see how to use the singular value decomposition to find the solution. Exercise 2 : Copy your m-file exer1. You should see that S(4,4) is substantially smaller than the others.

Set S(4,4) to zero and construct the solution accordingly. Look at V(:,4). This is the vector associated with the singular value that you set to zero.

Since the original system is rank deficient, there must be a non-trivial nullspace. Any solution can be represented as the sum of some particular solution and a vector from the nullspace, and the vector V(:,4) spans the nullspace. What multiple of V(:,4) can be added to your solution to yield [4;-3;2;-1]? Note that the singular value decomposition allows you to discover deficiencies in the data without knowing they are there, or even whether there are any in the first place.
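One way to find that multiple, sketched here with a placeholder variable name (xsolution stands for your computed solution): because the columns of V are orthonormal, the coefficient of V(:,4) can be read off with a dot product.

```matlab
% Hypothetical setup: xsolution is your computed particular solution and
% V comes from [U,S,V] = svd(A).  The target solution is given in the lab.
target = [4; -3; 2; -1];

% Since V(:,4) is a unit vector, the multiple t satisfying
%   xsolution + t*V(:,4) = target   (up to roundoff)
% is the projection of the difference onto V(:,4):
t = V(:,4)' * (target - xsolution);

% This residual should be near zero if target differs from xsolution
% only by a nullspace vector.
norm(xsolution + t*V(:,4) - target)
```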

This idea can be the basis of a fail-safe method for computing least-squares coefficients. This approach can also form the basis of efficient compression methods. The next exercise uses an image file; download it.


Matlab provides various image processing utilities. In this case, read the image in using the following command.

Display the color plot using the command image(nasacolor). The third subscript of the array nasacolor refers to the red, green, and blue color components. Remark : This is a cheap but dirty way to create a grayscale image from an RGB image. It is good enough for our purpose here.

Use the command colormap(gray); to set a grayscale colormap. Now you can show the picture with the commands image(nasa); title('Grayscale NASA photo'); Please do not send me this plot.
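Putting those steps together, a sketch might look like the following; the file name nasa.png and the channel-averaging conversion are assumptions for illustration, not necessarily the lab's elided commands.

```matlab
% Read the RGB image (file name here is a placeholder).
nasacolor = imread('nasa.png');

% Display the color image.
image(nasacolor);

% Cheap grayscale conversion: average the three color channels.
nasa = mean(double(nasacolor), 3);

% Show the grayscale picture.
colormap(gray);
image(nasa);
title('Grayscale NASA photo');
```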


Do not include these matrices with your summary! Plot the singular values on a semilog scale: semilogy(diag(S)). Please send me this plot with your summary. The full nasa matrix requires one number per pixel to store. In contrast, nasa50 requires only the first 50 columns of U and V and the first 50 diagonal entries of S, which amounts to better than a four-to-one savings in space.
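As a sketch, the truncated reconstructions can be formed like this (variable names follow the lab; keeping only the leading singular values and vectors is the standard rank-k approximation):

```matlab
[U, S, V] = svd(nasa);

% Rank-50 approximation: keep only the 50 largest singular values.
k = 50;
nasa50 = U(:,1:k) * S(1:k,1:k) * V(:,1:k)';

% Rank-25 approximation.
k = 25;
nasa25 = U(:,1:k) * S(1:k,1:k) * V(:,1:k)';
```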

Plot the three matrices nasa, nasa50 and nasa25 as images, including descriptive titles. You should see only very slight differences between the original and the version retaining the most singular values, some noticeable differences with 50 singular values, and serious degradation of the image in the case of 25 singular values. Please include the 25 singular value case with your summary.

## Two numerical algorithms

We now turn to the question of how to compute the SVD.

The easiest algorithm for the SVD is to note the relationship between it and the eigenvalue decomposition: the singular values of A are the square roots of the eigenvalues of A'A or AA'.

Indeed, if A = U*S*V', then A'A = V*(S'S)*V' and AA' = U*(SS')*U'. Unfortunately, this is not a good algorithm because forming the product A'A roughly squares the condition number, so that the eigenvalue solution is not likely to be accurate.
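A sketch of this naive approach, included only to illustrate it (the test matrix is arbitrary); it agrees with svd for well-conditioned matrices but loses accuracy as the condition number grows:

```matlab
A = rand(5);                       % arbitrary well-conditioned test matrix

% Naive approach: singular values via eigenvalues of A'*A.
lambda = eig(A' * A);
s_naive = sort(sqrt(abs(lambda)), 'descend');

% Reference values from the built-in svd.
s_ref = svd(A);

max(abs(s_naive - s_ref))          % small here, but cond(A'*A) = cond(A)^2
```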


Of course, it will work fine for small matrices with small condition numbers, and you can find this algorithm presented on many web pages. Don't believe everything you see on the internet. The algorithm used here is a one-sided Jacobi iterative algorithm that appears as Algorithm 4. This algorithm amounts to the Jacobi algorithm for finding eigenvalues of a symmetric matrix. See, for example, Golub and Van Loan, Matrix Computations.

A Jacobi rotation is a 2-by-2 rotation matrix chosen to annihilate the off-diagonal term of a symmetric 2-by-2 matrix.

The following algorithm repeatedly passes down the diagonal of the implicitly-constructed matrix, choosing pairs of indices (j,k), constructing the 2-by-2 submatrix from the intersection of rows and columns j and k, and using a Jacobi rotation to annihilate the off-diagonal term. Unfortunately, the Jacobi rotation for one pair (j,k) disturbs the rotations from earlier pairs, so the process must be repeated until convergence.

At convergence, the matrix S of the SVD has been implicitly generated, and the right and left singular vectors are recovered by multiplying all the Jacobi rotations together. The following algorithm carries out this process. Note: For the purpose of this lab, the matrix will be assumed to be square.


Rectangular matrices introduce too much algorithmic complexity. Algorithm: Given a convergence criterion epsilon, a square matrix A, a matrix U that starts out as U = A and ends up as a matrix whose columns are left singular vectors scaled by the singular values, and another matrix V that starts out as the identity matrix and ends up as a matrix whose columns are right singular vectors.
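A compact sketch of a one-sided Jacobi sweep, using the standard alpha/beta/gamma rotation construction; this is illustrative and may differ in details from the lab's exact listing:

```matlab
% One-sided Jacobi SVD sketch for a square matrix A.
% On return, the columns of U are left singular vectors scaled by the
% singular values, and the columns of V are right singular vectors.
U = A;
V = eye(size(A));
epsilon = 1e-12;                  % convergence criterion (assumed value)
converged = false;

while ~converged
    converged = true;
    n = size(U, 2);
    for j = 1:n-1
        for k = j+1:n
            alpha = U(:,j)' * U(:,j);    % squared norm of column j
            beta  = U(:,k)' * U(:,k);    % squared norm of column k
            gamma = U(:,j)' * U(:,k);    % inner product of the columns
            if abs(gamma) > epsilon * sqrt(alpha * beta)
                converged = false;
                % Rotation angle that orthogonalizes columns j and k.
                zeta = (beta - alpha) / (2 * gamma);
                t = sign(zeta) / (abs(zeta) + sqrt(1 + zeta^2));
                c = 1 / sqrt(1 + t^2);
                s = c * t;
                % Apply the rotation to columns of U and V.
                T = U(:,j);
                U(:,j) = c*T - s*U(:,k);
                U(:,k) = s*T + c*U(:,k);
                T = V(:,j);
                V(:,j) = c*T - s*V(:,k);
                V(:,k) = s*T + c*V(:,k);
            end
        end
    end
end

% Singular values are the column norms of U.
S = diag(sqrt(sum(U.^2)));
```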

As mentioned above, the columns of V are the computed right singular vectors. Complete the code so that it implements the above algorithm. Your algorithm should essentially reproduce the matrices U, S and V. You might find that the diagonal entries of S are not in order, so that the columns of U and V are similarly permuted, or you might observe that certain columns of U or V have been multiplied by -1. Be sure, however, that an even number of factors of -1 have been introduced.

If you do not get the right answers, you can debug your code in the following way. Now, try running the code once again. You will find the code has stopped at the breakpoint. Take out a piece of paper and calculate the value of alpha. There are only two terms in this sum. Did your code value agree with your hand calculation? If not, double-check both calculations and fix the one that is in error.

Do the same for beta and gamma. Similarly, check the values of s and c. Do you get A back? If not, you probably have something wrong in the two statements defining V , or, perhaps, you mis-copied the code updating U from the web page. Find the error. Remark: At this point in the iteration, U and V have not yet converged to their final values! In addition, print U1 , S1 and V1 using format long and visually compare them with the singular values you get from the Matlab svd function.

You also will note that the values do not come out sorted, as they do from svd. It is a simple matter to sort them, but then you would have to permute the columns of U1 and V1 to match.
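If you want to compare directly with svd, a sort-and-permute sketch like the following (using the lab's U1, S1, V1 names) does the bookkeeping:

```matlab
% Sort the computed singular values into descending order and
% permute the singular vectors to match.
[s_sorted, idx] = sort(diag(S1), 'descend');
S1 = diag(s_sorted);
U1 = U1(:, idx);
V1 = V1(:, idx);
```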