Problem Squad Problem #3
Let $\alpha$ be a negative number and let $A$ be the $10^6\times10^6$ matrix whose $5\times5$ cousin is this: $$A = \begin{bmatrix} 1^\alpha & 2^\alpha & 6^\alpha & 7^\alpha & 15^\alpha \\\\ 3^\alpha & 5^\alpha & 8^\alpha & 14^\alpha & 16^\alpha \\\\ 4^\alpha & 9^\alpha & 13^\alpha & 17^\alpha & 22^\alpha \\\\ 10^\alpha & 12^\alpha & 18^\alpha & 21^\alpha & 23^\alpha \\\\ 11^\alpha & 19^\alpha & 20^\alpha & 24^\alpha & 25^\alpha \end{bmatrix}$$ If $||A||_2=2$, what is $\alpha$?
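For concreteness, here is a hedged Python/NumPy sketch of the construction (ours, not from the problem statement; the function name winding_matrix is invented): the entries $1^\alpha, 2^\alpha, \ldots$ are laid down along antidiagonals in alternating directions, matching the $5\times5$ cousin above.

```python
import numpy as np

def winding_matrix(n, alpha):
    """n-by-n matrix whose entries k^alpha wind along antidiagonals
    in alternating ("lawnmower") order, matching the 5x5 example."""
    A = np.zeros((n, n))
    k = 1
    for d in range(2 * n - 1):                    # antidiagonal i + j = d
        lo, hi = max(0, d - n + 1), min(d, n - 1)
        rows = range(lo, hi + 1)                  # odd d: sweep downward
        if d % 2 == 0:
            rows = reversed(rows)                 # even d: sweep upward
        for i in rows:
            A[i, d - i] = float(k) ** alpha
            k += 1
    return A
```

For instance, winding_matrix(5, 1.0) reproduces the integers of the $5\times5$ cousin, and np.linalg.norm(winding_matrix(5, alpha), 2) gives its $2$-norm.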
This problem is reminiscent of Problem 3 of the 100-Digit Challenge, which is to find the $2$-norm of the infinite matrix $B$ with $b_{11}=1$, $b_{12}=\frac12$, $b_{21}=\frac13$, $b_{13}=\frac14$, $b_{22}=\frac15$, etc. (The matrix $B$ is the same as the above matrix $A$ with the exception that the natural numbers on it run down from the top row typewriter-style rather than winding up and down lawnmower-style.) The two problems are inverses: the 100-Digit Challenge problem was to calculate the $2$-norm of a structured matrix $B$ with entrywise exponent $-1$, whereas this new one is to calculate the entrywise exponent of a very similar structured matrix $A$ such that the $2$-norm of $A$ is equal to $2$.
The 100DC problem has been solved to high accuracy, and its solution suggests a way to approach this new one. Nick Trefethen's Ten Digit Algorithms includes a MATLAB code operator_norm.m that solves 100DC P#3 to 12 digits of accuracy in less than a second using epsilon extrapolation on the sequence of norms of the leading principal submatrices of $B$ of sizes $n=2,4,8,\ldots,1024$.
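We don't reproduce operator_norm.m here, but the extrapolation it relies on is short. A hedged Python/NumPy sketch of Wynn's epsilon algorithm (our own minimal version, not Trefethen's code; the odd columns of the epsilon table hold the accelerated estimates):

```python
import numpy as np

def wynn_epsilon(s):
    """Wynn's epsilon algorithm applied to a sequence s_0, ..., s_{m-1}.
    Returns the top entry of the last odd column of the epsilon table,
    the standard accelerated estimate of the limit."""
    m = len(s)
    kmax = m if m % 2 == 1 else m - 1   # last odd (extrapolated) column
    E = np.zeros((m, m + 1))            # column 0 is the zero column
    E[:, 1] = s                         # column 1 holds the sequence
    for k in range(2, kmax + 1):
        for i in range(m + 1 - k):
            E[i, k] = E[i + 1, k - 2] + 1.0 / (E[i + 1, k - 1] - E[i, k - 1])
    return E[0, kmax]
```

On a purely geometric sequence of partial sums the algorithm recovers the limit exactly after one Shanks step, which is a quick sanity check before trusting it on the norm sequence.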
Winning method: Extrapolate from smaller matrices
The matrix $A$ is far too big and dense and ill-conditioned to work with on a normal computer, but smaller versions of it are not. The entries of $A$ decay away from the $(1,1)$ entry, so we may be able to capture the behaviour of $\alpha$ by computing it for increasingly large matrices. Below are the results of such a computation on the leading principal submatrices of $A$ of various sizes $n$:
n        α
4        -0.418940425987
8        -0.539293628130
16       -0.584092406894
32       -0.603132120453
64       -0.611797710113
128      -0.615860842842
256      -0.617775990326
512      -0.618669432557
1024     -0.619078159850
2048     -0.619260673566
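A hedged sketch of how such a table can be produced, in Python with NumPy/SciPy rather than the MATLAB used here (function names are ours). Since $n\ll10^6$, none of the relevant antidiagonals of the big matrix are truncated, so entry $(i,j)$ of the leading submatrix can be written in closed form, and $\alpha_n$ follows from bracketing root-finding on $||A_n||_2-2$:

```python
import numpy as np
from scipy.optimize import brentq

def leading_submatrix(n, alpha):
    """Leading n-by-n principal submatrix of the big matrix A.
    For n far below 10^6 no antidiagonal is truncated, so entry
    (i, j) holds k^alpha with k given in closed form by the
    alternating antidiagonal winding."""
    i, j = np.indices((n, n))
    d = i + j                                # antidiagonal index
    offset = d * (d + 1) // 2                # entries on earlier antidiagonals
    pos = np.where(d % 2 == 1, i, d - i)     # odd d sweeps down, even d up
    k = offset + pos + 1
    return k.astype(float) ** alpha

def alpha_for(n):
    """Solve ||A_n||_2 = 2 for the entrywise exponent alpha.
    The bracket is safe: alpha near 0 gives norm near n, while
    alpha = -2 keeps the Frobenius norm below zeta(4)^(1/2) < 2."""
    f = lambda a: np.linalg.norm(leading_submatrix(n, a), 2) - 2.0
    return brentq(f, -2.0, -0.01)
```

Calling alpha_for(4), alpha_for(8), and so on reproduces the trend of the table; for the larger $n$ a cheaper norm estimator than a full SVD would be wanted.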
Applying epsilon extrapolation on the above sequence of numbers leads to a best guess of \\[ \alpha \approx -0.61939\ldots. \\]
Edit: Prof Trefethen was able to convince us on Friday that Richardson extrapolation is the proper method to use on the sequence of exponents above; he was also able to compute $\alpha$ for matrices of size up to $n=3200$ by using MATLAB's faster normest in place of the usual norm command, arriving at the estimate \\[ \alpha \approx -0.61941\ldots. \\]
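Repeated Richardson extrapolation for a quantity computed at sizes $n, 2n, 4n, \ldots$ also takes only a few lines. A hedged sketch (ours; it assumes, as is standard, an error expansion in powers of $1/n$ — whether that is the right model for this $\alpha_n$ sequence is part of the judgment call above):

```python
def richardson(s):
    """Repeated Richardson extrapolation for a sequence s[k] computed
    at sizes n, 2n, 4n, ...; assumes error ~ c1/n + c2/n^2 + ..."""
    t = list(s)
    for j in range(1, len(s)):
        w = 2.0 ** j                 # doubling n scales the 1/n^j term by 2^-j
        t = [(w * t[i + 1] - t[i]) / (w - 1.0) for i in range(len(t) - 1)]
    return t[0]
```

Each pass eliminates one more power of $1/n$ from the error, so a sequence whose error really is a short sum of such powers is recovered essentially exactly.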
Another method: Approximating the largest eigenvalue
Here is a sketch of another method to estimate $\alpha$. For real symmetric matrices, \\[ ||A||_2 = \sigma_\mathrm{max}(A) = |\lambda|_\mathrm{max}(A), \\] i.e., the $2$-norm equals the largest eigenvalue in absolute value.
Consider $A$ as a sum of its symmetric part $H$ and antisymmetric part $N$, writing \\[ A = \underbrace{\frac12(A+A^*)}_H + \underbrace{\frac12(A-A^*)}_N. \\] Because of the way its entries wind along antidiagonals, the matrix $A$ has small antisymmetric part. For example, at the moderate size $n=1000$, the entrywise exponent that satisfies the norm condition $||A_n||_2=2$ is approximately $\alpha_n=-0.619$. In this case, we find that $||H_n||_2=1.998$ while $||N_n||_2=0.098$.
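This split is easy to check numerically. A hedged Python/NumPy sketch (the size and exponent below are illustrative; the closed-form entry numbering is valid because $n\ll10^6$):

```python
import numpy as np

def leading_submatrix(n, alpha):
    # entry (i, j) holds k^alpha; k follows from the alternating
    # antidiagonal winding (exact while n is far below 10^6)
    i, j = np.indices((n, n))
    d = i + j
    k = d * (d + 1) // 2 + np.where(d % 2 == 1, i, d - i) + 1
    return k.astype(float) ** alpha

A = leading_submatrix(500, -0.619)
H = 0.5 * (A + A.T)          # symmetric part
N = 0.5 * (A - A.T)          # antisymmetric part
norm2 = lambda M: np.linalg.norm(M, 2)
print(norm2(A), norm2(H), norm2(N))   # ||N||_2 comes out much smaller than ||H||_2
```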
Therefore $\lambda_\mathrm{max}(A_n)$ will be approximately equal to $||A_n||_2$, and the exponent $\alpha_\lambda$ such that $\lambda_\mathrm{max}(A_n)~=~2$ is an estimate of the solution to our problem.
Beyond the difficulty that $\alpha_\lambda$ is still nontrivial to estimate for large matrices, this method cannot offer more than perhaps two digits of $\alpha$, since $||N||_2$ does not shrink as $n$ grows and so the error in the estimate remains large.
Another method: A Frobenius norm bound
Another approach, suggested by Mohsin Javed, is to bound the $2$-norm of $A$ by its Frobenius norm and then study the distribution of the singular values of $A$: \\[ 4 = ||A||_2^2 = \sigma_\mathrm{max}^2 (A) \leq \sum_{k=1}^n \sigma_k^2 = ||A||_F^2. \\] In particular, for our special $A$, we find the interesting bound \\[ 4 = ||A||_2^2 \leq ||A||_F^2 = \sum_{k=1}^{10^{12}} k^{2\alpha}. \\] This idea did not lead us very far in the quest for $\alpha$, but we did make the interesting discovery that we can solve the analogous problem for the Frobenius norm with a simple query to Wolfram|Alpha, namely
sum from k=1 to 10^12 of k^(2*x) = 4
which returns \\[ \alpha_F=-0.6469367572\ldots. \\] The above sum resembles a truncation of the zeta function, but it turns out that solving $\zeta(2x)=4$ yields an $x$ that agrees to only four digits with $\alpha_F$.
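The same query can be reproduced without Wolfram|Alpha. A hedged Python/SciPy sketch (ours): the truncated sum is evaluated as $\zeta(p)$ minus an integral approximation of the tail (the neglected correction is of order $N^{-p}\approx10^{-15}$ here, far below the digits we care about), and the equation is then solved by bracketing root-finding.

```python
from scipy.optimize import brentq
from scipy.special import zeta

N = 10 ** 12

def truncated_sum(p):
    """sum_{k=1}^{N} k^(-p) for p > 1: Riemann zeta minus the tail,
    with the tail approximated by the integral of t^(-p) from N up."""
    return zeta(p, 1) - N ** (1.0 - p) / (p - 1.0)

# solve sum_{k=1}^{N} k^(2x) = 4, i.e. truncated_sum(-2x) = 4
alpha_F = brentq(lambda x: truncated_sum(-2.0 * x) - 4.0, -0.9, -0.55)
print(alpha_F)   # close to the Wolfram|Alpha value above
```

Dropping the tail term altogether turns the equation into $\zeta(-2x)=4$, which is exactly the four-digit-only agreement noted above.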