diff doc/interpreter/diagperm.txi @ 18329:200851c87444 stable

Edits to Manual and indices
author Michael Godfrey <michaeldgodfrey@gmail.com>
date Sat, 04 Jan 2014 14:37:59 -0500
parents d63878346099
children 9ac2357f19bc 446c46af4b42
--- a/doc/interpreter/diagperm.txi
+++ b/doc/interpreter/diagperm.txi
@@ -18,6 +18,8 @@
 
 @node Diagonal and Permutation Matrices
 @chapter Diagonal and Permutation Matrices
+@cindex diagonal and permutation matrices
+@cindex matrices, diagonal and permutation
 
 @menu
 * Basic Usage::          Creation and Manipulation of Diagonal/Permutation Matrices
@@ -224,6 +226,7 @@
 
 @node Expressions Involving Diagonal Matrices
 @subsection Expressions Involving Diagonal Matrices
+@cindex diagonal matrix expressions
 
 Assume @var{D} is a diagonal matrix.  If @var{M} is a full matrix,
 then @code{D*M} will scale the rows of @var{M}.  That means,
@@ -260,6 +263,7 @@
 i.e., null rows are appended to the result.
 The situation for right-multiplication @code{M*D} is analogous.
 
+@cindex pseudoinverse
 The expressions @code{D \ M} and @code{M / D} perform inverse scaling.
 They are equivalent to solving a diagonal (or rectangular diagonal)
 system in a least-squares minimum-norm sense.  In exact arithmetic, this is
@@ -270,12 +274,12 @@
 The matrix division algorithms do, in fact, use division rather than 
 multiplication by reciprocals for better numerical accuracy; otherwise, they
 honor the above definition.  Note that a diagonal matrix is never truncated due
-to ill-conditioning; otherwise, it would not be much useful for scaling.  This
+to ill-conditioning; otherwise, it would not be of much use for scaling.  This
 is typically consistent with linear algebra needs.  A full matrix that only
-happens to be diagonal (an is thus not a special object) is of course treated
+happens to be diagonal (and is thus not a special object) is of course treated
 normally.
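+
+For instance, for a square, nonsingular @var{D}, the inverse scaling
+described above simply divides each row by the corresponding diagonal
+entry (a minimal sketch):
+
+@example
+@group
+D = diag ([2, 4]);
+M = [1, 2; 3, 4];
+D \ M    # rows of M scaled by 1/2 and 1/4
+@end group
+@end example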
 
-Multiplication and division by diagonal matrices works efficiently also when
+Multiplication and division by diagonal matrices also work efficiently when
 combined with sparse matrices, i.e., @code{D*S}, where @var{D} is a diagonal
 matrix and @var{S} is a sparse matrix, scales the rows of the sparse matrix and
 returns a sparse matrix.  The expressions @code{S*D}, @code{D\S}, @code{S/D}
@@ -399,6 +403,8 @@
 
 @node Permutation Matrix Functions
 @subsection Permutation Matrix Functions
+@cindex matrix, permutation functions
+@cindex permutation matrix functions
 
 @dfn{inv} and @dfn{pinv} will invert a permutation matrix, preserving its
 specialness.  @dfn{det} can be applied to a permutation matrix, efficiently
@@ -455,7 +461,7 @@
 @end example
 
 @noindent
-Finally, here's how you solve a linear system @code{A*x = b}
+Finally, here's how to solve a linear system @code{A*x = b}
 with Tikhonov regularization (ridge regression) using SVD (a skeleton only):
 
 @example
@@ -477,16 +483,17 @@
 
 @node Zeros Treatment
 @section Differences in Treatment of Zero Elements
+@cindex matrix, zero elements
 
 Making diagonal and permutation matrices special matrix objects in their own
 right and the consequent usage of smarter algorithms for certain operations
 implies, as a side effect, small differences in treating zeros.
-The contents of this section applies also to sparse matrices, discussed in
-the following chapter.
+The contents of this section also apply to sparse matrices, discussed in
+the following chapter (@pxref{Sparse Matrices}).
 
-The IEEE standard defines the result of the expressions @code{0*Inf} and 
-@code{0*NaN} as @code{NaN}, as it has been generally agreed that this is the
-best compromise.
+The IEEE floating-point standard defines the result of the expressions
+@code{0*Inf} and @code{0*NaN} as @code{NaN}.  This is widely agreed to be
+a good compromise.
 Numerical software dealing with structured and sparse matrices (including
 Octave), however, almost always makes a distinction between a "numerical zero"
 and an "assumed zero".
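+
+For instance, a diagonal matrix never multiplies out its assumed
+(off-diagonal) zeros, whereas the same matrix stored in full form does:
+
+@example
+@group
+D = diag ([1, 2]);   # off-diagonal zeros are assumed
+F = full (D);        # all zeros are numerical
+x = [Inf; 1];
+D * x                # [Inf; 2]
+F * x                # [Inf; NaN], since 0*Inf yields NaN
+@end group
+@end example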