*Problems problems all day long….. Will my problems work out right or wrong……*

This is going to be a post about problem solving in mathematics. It will be non-technical in the sense that readers won’t see any new mathematical technique or theorem in the course of this post. But after posting a comment over here, I was requested by many friends to write it up as a post on my blog.

Before I begin offering suggestions about problem solving in mathematics, I would like to spend a few words on problem solving in general. Problem solving is among the favourite exercises of a high-school mathematics enthusiast. But not everyone excels equally at it. That is to say, some students can solve a problem quickly and neatly, while others struggle a great deal. This often leaves the latter demotivated, and their enthusiasm for mathematics declines noticeably. But mathematics is much more than just solving problems. The interested reader is encouraged to read more on this page of Prof. T. Tao’s blog.

Okay, now let’s come to the point. Here I am going to present a brief summary of what I have always found to be helpful habits.

So let me begin with the three suggestions.

**1. Read the question carefully**

A problem thoroughly understood is always fairly simple.

— Charles Kettering

When asked a question, the first thing one must do is to properly understand what is being asked.

**2. Re-read and translate the question if required**

Once you understand the question, if you can solve it straightaway then kudos to you. Else, you may like to read the question again and translate it into a form that is easier to understand and gives you a better insight.

**3. Solve a special case to gain better insight**

If you can’t solve a problem, then there is an easier problem you can solve: find it.

— George Polya

It’s often helpful to solve the problem for a special case first, to gain better insight into the more general problem. A very famous example is *Cantor’s diagonal argument*, which shows that the set of real numbers is uncountable.

But lastly, I must admit that specific types of problems often require specific kinds of treatment to be solved.

In some later post I may write more on specific categories of problem solving in mathematics.


A very important role in additive combinatorics is played by the following theorem. If $latex A$ is a finite subset of $latex \mathbb{Z}$, or indeed of any Abelian group, and if $latex |A+A|\leq C|A|$, then for any non-negative integers $latex k$ and $latex l$ we have the estimate $latex |kA-lA|\leq C^{k+l}|A|.$ This is a theorem of Plünnecke that was rediscovered by Ruzsa. The proof had a structure of the following kind: define a type of graph, now known as a *Plünnecke graph*, and a notion of “magnification ratio”; formulate an inequality concerning Plünnecke graphs; using Menger’s theorem, prove the inequality for graphs with magnification ratio 1 (Menger’s theorem being used to find a collection of disjoint paths, the existence of which is a trivially sufficient condition for the inequality to hold in this case); using the tensor product trick and the construction of Plünnecke graphs that approximately have certain…
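As a quick numeric sanity check (certainly not a proof), one can verify the Plünnecke–Ruzsa inequality $latex |kA-lA|\leq C^{k+l}|A|$ on a small set of integers. The set $latex A$ and the range of $latex k,l$ below are illustrative choices only:

```python
from itertools import product

def sumset(X, Y):
    """All sums x + y with x in X, y in Y."""
    return {x + y for x in X for y in Y}

def k_fold(A, k):
    """kA = A + A + ... + A (k summands); by convention 0A = {0}."""
    S = {0}
    for _ in range(k):
        S = sumset(S, A)
    return S

# An arithmetic progression has small doubling, a natural test case.
A = set(range(10))                 # {0, 1, ..., 9}
C = len(sumset(A, A)) / len(A)     # doubling constant |A+A| / |A|

for k, l in product(range(3), repeat=2):
    diff = {a - b for a in k_fold(A, k) for b in k_fold(A, l)}  # kA - lA
    assert len(diff) <= C ** (k + l) * len(A)

print("Plünnecke–Ruzsa bound holds for all tested (k, l)")
```

For the progression $latex \{0,\dots,9\}$ the doubling constant is $latex 19/10$, and the bound holds with plenty of room to spare, as the theorem predicts.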


**Fractal geometry is not just a chapter of mathematics, but one that helps Everyman to see the same world differently.**

We often naively see the dimension of a space as the minimum number of pieces of information we need to provide to specify any point in it uniquely. For example, think of specifying a point in a space that has only one point: we need to provide no information, so a point gets dimension 0. Then, to specify a point on a line we need to provide just one coordinate, so a line gets dimension 1. Now comes the interesting part — fractional dimensions. One of the first doubts that can cross our mind when we hear the term “fractional dimensions” (especially after reading the naive introduction we presented here) is: how can a fractional amount of information be provided to specify something? But what can happen is this — there is some minimum integer number of coordinates we need to specify, but “less than” that many coordinates may be good enough (in some sense) due to self-similarity. Let us see an example to understand what we are saying here.

Consider the Cantor set, for instance. How can we specify a point in this space? One way is to mention the coordinate of the point. Since this space is a subset of the straight line, mentioning one coordinate is good enough. Again, since this space has more than a single point, specifying no information will not allow us to pick out points uniquely. So 1 is sort of a minimum threshold on the (integer) number of pieces of information we need to provide. But the Cantor set is also self-similar: it equals the union of two copies of itself, each shrunk by a factor of 3 and translated. So it seems (in some sense) that specifying “less than 1” piece of information can be good enough. This motivates us to think of some fractional dimension at play here (its Hausdorff dimension turns out to be $latex \log 2/\log 3\approx 0.63$), and indeed this was one of the examples that motivated the study of fractional dimensions.
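To make this intuition concrete, here is a small box-counting sketch: after $latex k$ steps of the middle-thirds construction, the Cantor set is covered by $latex 2^k$ intervals of length $latex 3^{-k}$, so the estimate $latex \log N(\varepsilon)/\log(1/\varepsilon)$ for the dimension is already the constant $latex \log 2/\log 3$ at every scale:

```python
import math

def cantor_intervals(depth):
    """Intervals remaining after `depth` steps of the middle-thirds construction."""
    intervals = [(0.0, 1.0)]
    for _ in range(depth):
        new = []
        for a, b in intervals:
            third = (b - a) / 3
            new.append((a, a + third))      # keep the left third
            new.append((b - third, b))      # keep the right third
        intervals = new
    return intervals

# At scale 3^-k the set needs 2^k boxes, so the dimension estimate
# log N / log(1/eps) equals k*log 2 / (k*log 3) at every depth.
for k in range(1, 8):
    n_boxes = len(cantor_intervals(k))      # = 2**k
    eps = 3.0 ** -k
    print(k, math.log(n_boxes) / math.log(1 / eps))

print("log 2 / log 3 =", math.log(2) / math.log(3))
```

Every line of the loop prints the same estimate, about 0.6309, which is exactly $latex \log 2/\log 3$.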

In loose words, we call such spaces with fractional dimensions **fractals.**

And all these loose statements about dimension can be made rigorous via the notion of Hausdorff dimension.

One can play with fractals and create a wide variety of beautiful patterns. For example, the Koch snowflake is created by starting from an equilateral triangle and, at each stage, erecting an equilateral triangle on the middle third of every side of the shape obtained at the previous stage.
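One can tabulate this construction: each stage replaces every edge by four edges of one third the length, so the side count is $latex 3\cdot 4^n$, the perimeter grows by a factor $latex 4/3$ per stage (unboundedly, even though the region stays bounded), and the boundary has dimension $latex \log 4/\log 3$. A small sketch, assuming unit initial side length:

```python
import math

def koch_stats(n, side=1.0):
    """(#edges, perimeter) of the Koch snowflake boundary after n stages."""
    num_sides = 3 * 4 ** n        # each stage turns every edge into 4 edges
    side_len = side / 3 ** n      # each new edge is 1/3 as long
    return num_sides, num_sides * side_len

for n in range(6):
    print(n, koch_stats(n))       # the perimeter grows by 4/3 each stage

# self-similarity dimension: 4 copies, each scaled by 1/3
print("log 4 / log 3 =", math.log(4) / math.log(3))   # ≈ 1.26, a fractal curve
```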

Here is an X-mas tree created using fractals.

“Tell me, my friend, what you desire,” said the bartender, a short, beaming man. “You seem sad. May I be permitted to know the reason?”

“It’s no secret,” said the logician, heaving a sigh. “I lost three consecutive bets at a casino. I’m broke! I lost my job today due to the recession. My girlfriend rejected my proposal of marriage. There’s no hope left.”

“May I then suggest the unique beverage so special to this bar? Trust me, a glass of it will help you focus, although generally one never says that about alcohol. I, however, take immense pride in my creation, which has induced thoughts, no matter how much they ramble initially, to converge ultimately, in all positivity.”

“If you say so,” sighed the logician, “although I find it a bit strange that you have no other customer. You sure I’m not getting killed tonight?”

The bartender laughed, “My place isn’t something for the common bloke. It’s for people like you. People who have some status to be here. Then comes the alcohol. Not everyone likes it. But if you do like it, you’ll love it. You’ll never be able to get out of its grips, ever. That’s for sure.”

The logician took a sip.

“Tell me, dear friend, about your ambitions.”

“I am ambitious, but I have finite ambitions. I do not dwell on the imaginary world.”

“You wouldn’t have had your job, then! It’s always wise to expect something that has already happened in the past, don’t you think?”

“Precisely. But how long are you going to wait for all past events to show up?”

“Times change. It’s illogical to believe that what happened forty years ago will recur. But, it’s not illogical to base your expectation on your immediate past. Indeed, given your memory has not failed you, your expectations for today should precisely be equal to what you just observed in the immediate past!”

“Quite true. I think I get what you are trying to say.”

“Of course, now you will have to admit it. Originally your ambitions and thoughts weren’t focussed. Thanks to my drink, now you have an inkling of what you should hope for. Remember, excess hope can doom you. But hope too little and you will never amount to anything amazing.”

Suddenly, the logician jumped up. “Thanks for everything, good sir! You don’t know how helpful you have been in a development related to my career. I thank you with all my heart.”

He proceeded to run out, barely able to contain his happiness, when the bartender called out, “I just couldn’t find a name for this new drink I’ve created. Seeing you’ve been affected, could you suggest a name, please?”

The logician smiled. “Of course, I know what its name should be!”

Can you predict the name suggested by the logician?

PS: This tale (rather, puzzle) is not crafted by me. But the name of the original author is withheld…

We (meaning those readers who already know the basics of the various modes of convergence of random variables) know that every sequence $latex \{X_n\}$ of random variables that converges in probability to a random variable $latex X$ has a subsequence which converges almost surely to $latex X$.

Now, let $latex \{X_n\}$ be a sequence of random variables which converges in probability to some random variable $latex X$. Then we know that every subsequence of $latex \{X_n\}$ also converges in probability to $latex X$ (WHY? Prove it as an exercise). Thus every subsequence of $latex \{X_n\}$ has a further subsequence that converges almost surely to $latex X$. So $latex \{X_n\}$ is a sequence of random variables every subsequence of which has a further subsequence converging almost surely to $latex X$.

Now, recall a result about convergence of real numbers: a sequence $latex \{x_n\}$ of reals converges to some real number $latex x$ if and only if every subsequence of $latex \{x_n\}$ has a further subsequence converging to $latex x$. We may be tempted to draw the same conclusion here for random variables. But this is wrong. To be precise, this result does not hold for the almost sure mode of convergence of random variables.

Instead of justifying why this goes wrong for the almost sure mode of convergence, let us see what would have happened if it was true.

This would have implied that $latex \{X_n\}$ converges almost surely to $latex X$.

So in short, we would have got **convergence in probability $latex \Rightarrow$ almost sure convergence**, and we know that this is wrong. Almost sure convergence is a much stronger condition than convergence in probability.
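A standard counterexample is the “typewriter” sequence: the indicators of the dyadic intervals of $latex [0,1)$, listed level by level. The interval lengths tend to 0, so the sequence converges to 0 in probability; yet every point of $latex [0,1)$ is covered once per level, so the sequence is 1 infinitely often at every point and converges almost surely nowhere. A small sketch checking both properties (using exact rationals):

```python
from fractions import Fraction

def typewriter_intervals(levels):
    """Dyadic intervals [j/2^m, (j+1)/2^m), listed level by level."""
    out = []
    for m in range(levels):
        for j in range(2 ** m):
            out.append((Fraction(j, 2 ** m), Fraction(j + 1, 2 ** m)))
    return out

intervals = typewriter_intervals(8)

# In probability: P(X_n = 1) is the interval length, which tends to 0.
lengths = [b - a for a, b in intervals]
assert lengths[-1] == Fraction(1, 128)

# Not almost surely: every omega in [0,1) lies in exactly one interval
# per level, so X_n(omega) = 1 infinitely often.
omega = Fraction(1, 3)
hits = sum(1 for a, b in intervals if a <= omega < b)
assert hits == 8    # once for each of the 8 levels
print("omega =", omega, "is covered at every level, while lengths -> 0")
```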

**Exercise:** Show that a sequence $latex \{X_n\}$ of random variables converges in probability to $latex X$ if and only if every subsequence of $latex \{X_n\}$ has a further subsequence that converges almost surely to $latex X$. (In other words, the result we mentioned about convergence of sequences of real numbers does hold for convergence in probability of random variables.)

Given a finite set of points, a line is called $latex k$-rich if it passes through exactly $latex k$ of the points. Usually $latex 2$-rich lines are also called *ordinary lines*.

Then the simplest version of the Sylvester–Gallai theorem says that:

Given any finite set of points in the plane, not all collinear, there must be a $latex 2$-rich line.

It was proposed as an exercise problem by Sylvester in 1893 and first proved by Gallai in 1944.

The following sketch is popularly known as Kelly’s proof of the Sylvester–Gallai problem (named after Leroy Milton Kelly).

We will not spell the proof out in words (doing so would only make it look mundane and would not help you appreciate its beauty). Instead we present a wordless proof: a simple drawing that speaks for itself.

(Look for the pair, consisting of a point of the set and a line through two or more points of the set, that realizes the shortest positive distance.)

(picture taken from AMS blog).

According to Prof. Terence Tao, this proof is definitely beautiful, but it is too clever to extend to proofs of stronger and/or similar results.
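For small point sets one can simply find the ordinary lines by brute force; this is only a finite check of the statement, not a proof. The sketch below groups point pairs by the line through them, using exact rational arithmetic so that collinear points land on the same canonical key:

```python
from fractions import Fraction
from itertools import combinations

def line_through(p, q):
    """Canonical form (a, b, c) of the line ax + by = c through p and q."""
    (x1, y1), (x2, y2) = p, q
    a, b = y2 - y1, x1 - x2
    c = a * x1 + b * y1
    # normalize by the first non-zero coefficient for a unique key
    for t in (a, b, c):
        if t != 0:
            return (Fraction(a, t), Fraction(b, t), Fraction(c, t))

def ordinary_lines(points):
    """All lines passing through exactly two of the given points."""
    lines = {}
    for p, q in combinations(points, 2):
        lines.setdefault(line_through(p, q), set()).update([p, q])
    return [pts for pts in lines.values() if len(pts) == 2]

# four points, not all collinear: the theorem guarantees an ordinary line
pts = [(0, 0), (1, 0), (2, 0), (0, 1)]
print(len(ordinary_lines(pts)))
```

Here the line through $latex (0,0),(1,0),(2,0)$ is $latex 3$-rich and hence not ordinary, but each line through $latex (0,1)$ and one of the other points is.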

**Over other fields**

But this result is no longer true in general if the reals are replaced by other fields. A trivial counterexample is given over the finite fields $latex \mathbb{F}_p$, where we can take the entire plane as our point set. A less trivial counterexample can be constructed over the complex field, as mentioned by Prof. Terence Tao (in a lecture on topics related to incidence combinatorics).

So, interestingly enough, this shows some salient feature of the real numbers (although it remains disguised in the above-mentioned proof), and this motivates studying the problem more closely.

Other similar results belong to the mathematical topics of “incidence combinatorics” and “algebraic combinatorics”.


Let $latex n$ be an odd positive integer. Suppose $latex N=\{1,2,\dots,n\}$. Let $latex A$ be a symmetric $latex n\times n$ matrix such that all the entries in $latex A$ are from the set $latex N$, in such a way that every column and every row contains all the elements of $latex N$. Show that the main diagonal of $latex A$ also contains all the elements of $latex N$.

(Proposed by our Mathematics Scouts team).

I found it so pleasing that I want to share it here with all the readers.
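The claim can be illustrated (on one natural family of examples, not as a proof) with the addition table of $latex \mathbb{Z}_n$: the matrix with entry $latex (i+j)\bmod n$ is symmetric and every row and column contains all of $latex 1,\dots,n$; for odd $latex n$ the diagonal entries $latex 2i\bmod n$ run over all residues, while for even $latex n$ the diagonal misses half the values:

```python
def addition_table(n):
    """Cayley table of Z_n: entry (i, j) is (i + j) mod n, written as 1..n."""
    return [[(i + j) % n + 1 for j in range(n)] for i in range(n)]

def diagonal_is_full(n):
    A = addition_table(n)
    full = set(range(1, n + 1))
    # sanity checks: the table is symmetric and every row contains 1..n
    assert all(A[i][j] == A[j][i] for i in range(n) for j in range(n))
    assert all(set(row) == full for row in A)
    return {A[i][i] for i in range(n)} == full

for n in [1, 3, 5, 7, 9]:
    print(n, diagonal_is_full(n))   # True: 2 is invertible mod odd n

print(4, diagonal_is_full(4))       # False: even n is a genuine exception
```

This also shows why the hypothesis that $latex n$ is odd cannot be dropped.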

Here we first describe the game of double chess. In double chess all the rules of ordinary chess are preserved, except that on each turn a player makes two consecutive moves.

Show that the first player (that is, White) can ensure at least a non-losing outcome.

The solution is pretty interesting.

Assume the contrary; then (by determinacy of such games) the second player (Black) has a winning strategy.

So all White does is push himself into Black’s position.

(How does he do it?) On his first move (which here, of course, means the first set of two moves), he puts out his knight onto the board and brings it straight back.

Observe that he is now in exactly the position the second player was in at the beginning of the game, so he can use the second player’s winning strategy to force a victory, which is a contradiction, since both players cannot win.

All this problem needs is the golden fact that in any two-player game, if one player has a strategy to put himself into the other player’s position, then he can guarantee at least a non-losing outcome for himself.

I remember hearing in a university lecture that for certain variants of chess (with some modification of the rules or otherwise), winning strategies are heavily studied.

In fact, it is one of the most famous open problems in the subject to find a winning strategy for ordinary chess. (Do you know: does one even exist?)


Let $latex n$ be a positive integer. Let $latex A$ be an $latex n$ by $latex n$ non-zero matrix. Show that we can make $latex A$ an invertible matrix by changing at most $latex n$ entries of $latex A$.

We give the official solution.

To solve it we will use induction.

You can read the comments on the question post, so instead of writing down the whole solution we will only outline it.

It holds true for $latex n=1$ (HOW?).

And then, once it is true for some $latex n$, we can use the (golden) fact that a matrix is invertible if and only if it has a non-zero determinant.

We can then arrange the entries so as to reduce the $latex (n+1)\times(n+1)$ case to the case of an $latex n\times n$ matrix, changing only one entry.

This solves the original problem.
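A different constructive route (a sketch of my own, not the official inductive solution) also fits the bound: pick a non-zero entry $latex a_{ij}$, choose a permutation $latex \sigma$ with $latex \sigma(i)=j$, and set the entries at positions $latex (k,\sigma(k))$ for $latex k\neq i$ to a common value $latex M$. The determinant is then a polynomial in $latex M$ of degree $latex n-1$ with leading coefficient $latex \pm a_{ij}\neq 0$, so it has at most $latex n-1$ roots and some $latex M\in\{1,\dots,n\}$ must give a non-zero determinant, changing at most $latex n-1$ entries:

```python
def det(M):
    """Determinant by Laplace expansion (fine for small matrices)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def make_invertible(A):
    """Change at most n-1 entries of a non-zero integer matrix A
    (all lying on one permutation through a non-zero entry of A)
    so that the result has non-zero determinant."""
    n = len(A)
    i, j = next((r, c) for r in range(n) for c in range(n) if A[r][c] != 0)
    sigma = list(range(n))
    sigma[i], sigma[j] = j, i          # transposition, so sigma(i) = j
    # det(B) is a degree-(n-1) polynomial in M with leading coefficient
    # ±A[i][j] != 0, hence at most n-1 roots: some M in 1..n must work.
    for M in range(1, n + 1):
        B = [row[:] for row in A]
        for k in range(n):
            if k != i:
                B[k][sigma[k]] = M
        if det(B) != 0:
            return B
    raise AssertionError("unreachable for a non-zero matrix")

A = [[0, 1, 0],
     [0, 2, 0],
     [0, 3, 0]]                        # non-zero but singular (rank 1)
B = make_invertible(A)
changed = sum(A[r][c] != B[r][c] for r in range(3) for c in range(3))
print(det(B), changed)                  # non-zero determinant, <= 2 changes
```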

We received one more correct solution, from Sourav Dey, which involved the rank of the matrix (we do not write it here, as we plan to present it later when we discuss ranks on our website as a broader topic).

We also received a question from our Mathematics Scouts team about this problem, which is quite interesting and, to the best of my knowledge, still open.

Here it goes.

Discuss any algorithm to find the minimum number of entries we need to change to make a given square matrix invertible.

Frankly speaking, I don’t even know whether any such algorithm exists, but the question sounds quite interesting.


Let $latex n$ be a positive integer. Let $latex A$ be an $latex n$ by $latex n$ non-zero matrix. Show that we can make $latex A$ an invertible matrix by changing at most $latex n$ entries of $latex A$.

(Proposed by S. Rahaman, India).

We again thank every problem sender.
