User:Linas/Lattice models


This page concerns a formal development of 1-D lattice models. Moved here from extensive discussions at User talk:Linas.

Intro

Consider a one-dimensional, two-state lattice model, with the lattice having a countable infinity of lattice positions. The set of all possible states of the lattice is the set of all possible (infinitely-long) strings in two letters. Please note that this set of strings is a very large set: it has $2^{\aleph_0}$ elements. The cardinality of this set is the cardinality of the continuum. The (classical) Hamiltonian is a function that assigns a real value to each possible element of this set. The Hamiltonian is not the only interesting function; one may consider the Boltzmann factor $e^{-\beta H}$, and so on.

A minor problem arises: how should we label the points in this set? The set is not enumerable, so one cannot label them with integers. Maybe we don't need to label the points in the set, other than to say each point can be labelled by an infinite string in two letters. Let's punt on this issue.

In order to do statistical mechanics on this lattice, we will need to integrate over subsets of this set, or possibly integrate over all of the set. The problem now arises: how does one define an integral on this space? Since the points in the space are not countable, one cannot just say "oh, perform a sum", since a "sum" is, by definition, something defined only on countable sets. We need an analog of a sum that works on uncountable spaces. I am aware of only one such analog, and that is the machinery of measure theory. If there are other analogs, I do not know what they are.

However, the machinery of measure theory forces a radical shift in the language, notation and concepts used to discuss the problem. First of all, one can no longer employ the idea that the space consists of a set of points. There is a reason for this: not every subset of an uncountably-infinite set is "measurable". In fact, almost all subsets of an uncountably-infinite set are not measurable. So the very first step of measure theory is to throw away almost all subsets of the total space. One keeps only the measurable sets.

How does one do this? One needs to define a sigma algebra for this space. I am aware of several possible sigma-algebras for this space. I claim, without proof, that the algebra generated by the cylinder sets is the most "natural" for this problem. The reason for this will hopefully emerge later in the discussion. A homework problem might be to list as many different possible sigma algebras as possible for this space. A real bonus would be to prove that this list is a complete list.

In this new language, the Hamiltonian takes on a new and very different appearance. It is no longer a function from points to the reals, but a function from elements of the sigma algebra to the reals. Intuitively, this "new" Hamiltonian can be visualized or intuited as the "average" of the old, classical Hamiltonian, the average taken over a certain subset of points. However, this intuition is dangerous, and can lead to errors: in particular, one has the chicken-and-egg problem of how to define the "average" of the old, classical Hamiltonian. The new Hamiltonian is different, and cannot be constructed from the old Hamiltonian. However, there is a way to prove that the new Hamiltonian is a faithful representation of the old one. If one has chosen one's sigma algebra wisely, then for every possible point in the old set, there is a sequence or net of elements of the sigma algebra that contain the old point, and the measure of the elements in this sequence decreases to zero.

Suppose that the elements of this net are denoted by $A_n$ for integer n. The net is ordered so that $A_m \subset A_n$ whenever $m > n$. The "new" Hamiltonian is some function $H$ that assigns a real value to each $A_n$. We can then say that the new and the old Hamiltonians are equal or equivalent when, for the "classical" point p,

$p \in A_n$ for all $n$,

and

$\lim_{n\to\infty} \mu(A_n) = 0,$

and

$\lim_{n\to\infty} H(A_n) = H_c(p),$

where $H_c(p)$ is the classical energy of the point p which the net is converging to. Again, to be clear: the point p is a possible configuration of the lattice model; each and every point p corresponds to an infinite string in two letters, and vice-versa, in one-to-one correspondence. If a net can be found for all points p, and the above relationship holds for all points p, and it holds for all possible nets to the point p, then we can honestly and truthfully insist that the "new" and the "old" Hamiltonians "are the same".
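To make the idea of such a net concrete, here is a minimal Python sketch. It truncates the infinite lattice to a finite window, picks an arbitrary example configuration p (the string "ABBABAAB" is just an illustration), and lists the nested sets $A_n$ of all strings agreeing with p on the first n letters. The fair-coin weighting used for the sizes is assumed here only for illustration; the measure is constructed properly in a later section.

```python
from fractions import Fraction

# A fixed "classical" configuration p, shown here only through its first few letters.
p = "ABBABAAB"

# A_n = the set of all infinite strings that agree with p on the first n letters.
# Under a fair-coin weighting (each letter A or B equally likely, independently),
# such a set has size 2^(-n) relative to the whole space.
for n in range(1, len(p) + 1):
    prefix = p[:n]
    size = Fraction(1, 2**n)
    print(f"A_{n}: strings starting with {prefix!r}, relative size = {size}")

# The sets are nested, A_{n+1} is contained in A_n, each contains p, and the
# sizes decrease to zero -- exactly the kind of net described above.
```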

The above sets up a rigorous and mathematically precise vocabulary for the further discussion of the problem. Let me know if you have any questions, or if anything was unclear. In the meanwhile, I will contemplate how to present the next steps. linas 21:32, 3 September 2006 (UTC)

Dear Linas. Yes, I think I am with you so far---very clear. (I was just going to add that the above should be true for all such nets to the point p, but then you got there first.)

The sigma algebra

OK, next installment: the explicit construction of the sigma algebra. Let the positions on the lattice be labelled by an integer n. The elements of the subbase of the sigma algebra can be completely enumerated by ordered pairs (n,s) where s is a string in two letters of finite length k, for integer $k \geq 1$. Visualize the pair (n,s) as the set of all states where the lattice values between location n and n+k-1 are equal to the string s. Use $C(n,s)$ to denote an element of this sub-base.

The subbase of a topology is a collection of sets, which, by intersection and union, generate the rest of the topology. A brief review of intersection and union is in order. The intersection $C(n,s) \cap C(m,t)$ is the set of all configurations that match s at n AND match t at m. The union $C(n,s) \cup C(m,t)$ is as above, with OR taking the place of AND.

Let the two letters be "A" and "B". Then for example,

$C(0,A) \cap C(1,B) = C(0,AB)$

and

$C(0,A) \cap C(0,B) = \varnothing$

and

$C(0,A) \cup C(0,B) = \Omega$

with $\Omega$ representing the entire space. One noteworthy aspect of this topology is the somewhat "backwards" relation between the string labels and the sets, in that when two strings overlap, it is likely that the intersection of the corresponding sets will be empty, whereas when the strings don't overlap, the intersection will never be empty. Thus, for example,

$C(n,AB) \cap C(m,AB) = \varnothing$

whenever $|n-m| = 1$. (I believe that this property will be responsible for some of the "fractalishness", to be defined and discussed later.)
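These set operations are easy to experiment with. The following Python sketch represents a cylinder set $C(n,s)$ by its finite list of constraints (the helper names cylinder and intersect are ad hoc) and reproduces the examples above, including the empty intersection of two overlapping copies of "AB".

```python
def cylinder(n, s):
    """Represent C(n, s) as a dict of {lattice position: required letter}."""
    return {n + i: letter for i, letter in enumerate(s)}

def intersect(c1, c2):
    """Intersection of two cylinder sets; None stands for the empty set."""
    merged = dict(c1)
    for pos, letter in c2.items():
        if pos in merged and merged[pos] != letter:
            return None          # conflicting constraints: empty intersection
        merged[pos] = letter
    return merged

# Non-overlapping windows always intersect non-trivially:
print(intersect(cylinder(0, "A"), cylinder(1, "B")))   # same as cylinder(0, "AB")
# Same position, different letters: empty.
print(intersect(cylinder(0, "A"), cylinder(0, "B")))   # None
# Overlapping copies of "AB" shifted by one: empty, as claimed above.
print(intersect(cylinder(3, "AB"), cylinder(4, "AB"))) # None
```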

OK, with you so far. This topology seems pretty natural.
At some point, we should list all other possible topologies.
Agreed.

The measure

The measure $\mu$ is a function that assigns to each element $C(n,s)$ a real, non-negative value. We'll normalize the measure such that

$\mu(\Omega) = 1$

and

$\mu(\varnothing) = 0.$

The requirement of sigma additivity implies that all other values will be less than one. A measure will be said to be translation invariant if

$\mu(C(n,s)) = \mu(C(m,s))$

for all integers $m, n$. We can use sigma additivity to construct a collection of translation invariant measures as follows. Let

$\mu(C(n,A)) = x$

for some $0 \le x \le 1$. Then sigma-additivity requires that

$\mu(C(n,B)) = 1 - x,$

which follows from the fact that the union of these two disjoint sets is the entire space. One may readily deduce that

$\mu(C(n,s)) = x^{\#A(s)} (1-x)^{\#B(s)},$

where $\#A(s)$ is the number of times the letter 'A' occurs in the string s, and likewise for $\#B(s)$. In most physics applications, the canonical and symmetric choice would be $x = 1/2$, but it should be clear that this is not mathematically constrained. One might even be able to make physical arguments to have x be something other than one-half, if, for example (and maybe a bad example), some external force is causing there to be more spins pointing in one direction than the other, on average.
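A small Python sketch of this measure (the function name mu is ad hoc; the string argument stands for s, and the position n never enters):

```python
def mu(s, x=0.5):
    """Translation-invariant measure of the cylinder C(n, s): x^#A * (1-x)^#B.
    The value does not depend on the position n."""
    nA = s.count("A")
    nB = s.count("B")
    return x**nA * (1 - x)**nB

print(mu("AAB"))          # 0.125 for the symmetric choice x = 1/2
print(mu("AAB", x=0.3))   # 0.3**2 * 0.7 for a biased lattice
```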

Let's pursue this idea just a bit further. Let

$S_k = \{A,B\}^k$

be the set of all strings of length k. Then, one has $C(n,s) \cap C(n,t) = \varnothing$ for distinct $s,t \in S_k$; that is, this set of strings defines sets that are pair-wise disjoint. One has

$\bigcup_{s \in S_k} C(n,s) = \Omega$

and, from sigma additivity, we deduce

$\sum_{s \in S_k} \mu(C(n,s)) = \sum_{j=0}^{k} \binom{k}{j} x^j (1-x)^{k-j} = 1.$

The appearance of the binomial coefficient is another hint that there might be something fractal around the corner. The binomial coefficient has many relations to fractals; for example, if one considers their values modulo a prime p, then Pascal's triangle takes the form of the finite-difference version of Sierpinski's gasket. Sums over rows, i.e. sums of these values across a row of the triangle, generate the Batrachion curve, which is the finite-difference version of the Takagi curve -- which has a fractal, dyadic symmetry. Yes, this is a stretch at this point in the game, but the powerful p-adic fractal nature of the binomial coefficient should not be overlooked or under-estimated in its importance.
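This can be checked numerically. The sketch below sums the measure over all $2^k$ strings of length k, both directly and grouped by the number of A's, recovering the binomial expansion (the values of x and k are arbitrary choices):

```python
from itertools import product
from math import comb

x, k = 0.3, 6

# Sum the measure over all 2^k strings of length k: sigma-additivity says
# these cylinders partition the whole space, so the total must be 1.
total = sum(x**s.count("A") * (1 - x)**s.count("B")
            for s in ("".join(p) for p in product("AB", repeat=k)))
print(total)  # 1.0 (up to floating point)

# Grouping strings by their number of A's reproduces the binomial expansion.
print(sum(comb(k, j) * x**j * (1 - x)**(k - j) for j in range(k + 1)))  # 1.0
```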

OK. Well we will see if it crops up later.

Other translation-invariant measures

The partition function, to be constructed later, will be seen to be another translation-invariant measure, not taking the form above.

Non-translation-invariant measures

For completeness, here's an example of a non-translation-invariant measure. One may assign

$\mu(C(n,A)) = x_n$

for some arbitrary sequence $0 \le x_n \le 1$. One then has

$\mu(C(n,B)) = 1 - x_n.$

To get the measure on the remaining elements of the subbase, one uses a multiplicative construction, requiring that

$\mu(C(n,sA)) = \mu(C(n,s)) \, x_{n+k}$

and

$\mu(C(n,sB)) = \mu(C(n,s)) \, (1 - x_{n+k}),$

where k is the length of the string s. I believe this measure is fully self-consistent. I think more general measures are possible, combining the above with partition-function-like measures.
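A sketch of this construction, with a randomly chosen sequence $x_n$ standing in for "some arbitrary sequence" (only a finite window of positions is ever needed for any one cylinder):

```python
import random

# An arbitrary position-dependent sequence x_n in [0, 1]; here just a sample,
# indexed by lattice position.
random.seed(0)
x = {n: random.random() for n in range(-10, 10)}

def mu(n, s):
    """Multiplicative, position-dependent measure of C(n, s):
    each letter at position n+i contributes x_{n+i} for 'A' and 1 - x_{n+i} for 'B'."""
    value = 1.0
    for i, letter in enumerate(s):
        p = x[n + i]
        value *= p if letter == "A" else 1 - p
    return value

# Sigma-additivity check on a single site: mu(C(n,A)) + mu(C(n,B)) = 1.
print(mu(3, "A") + mu(3, "B"))   # approximately 1.0
# Translation invariance fails: the same string gives different values at
# different positions.
print(mu(0, "AB"), mu(5, "AB"))
```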

Some general remarks

It should be noted that, up to this point, the theory developed so far is more or less isomorphic to that of a certain class of Markov chains, and also to that of subshifts of finite type, and, to a lesser degree but more generally, to measure-preserving dynamical systems. I don't want to pursue these relations just right now.

FWIW, having a measure is more or less sufficient for defining entropy, for example, the Kolmogorov-Sinai entropy. If one has a metric, one may also have a topological entropy. These are all related to the entropies that can be given on Markov chains and in information theory and of course stat mech in general; the problem is to a large degree a problem of wildly varying notation. Perhaps we'll explore these later.

We'll also have to do partition function.

Entropy

Let $Q = \{Q_1, Q_2, \ldots, Q_k\}$ be a partition of $\Omega$ into k measurable pair-wise disjoint pieces. The information entropy of a partition Q is defined as

$S(Q) = -\sum_{i=1}^{k} \mu(Q_i) \log \mu(Q_i).$

I believe that the correct definition of the "measure-theoretic entropy" is then

$S = \sup_Q S(Q),$

where the supremum is taken over all finite measurable partitions. I'm not sure, I'm guessing that

$S = -x \log x - (1-x) \log (1-x),$

or something like that, for the translation-invariant measures given above, although I'd have to do some thinking to verify that this is correct. Anyway, it gives a general idea.
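For the particular partitions given by the length-k cylinders at a fixed position, the partition entropy is easy to compute with the translation-invariant measure above; the per-letter rate is exactly the $-x \log x - (1-x)\log(1-x)$ guessed above. A minimal sketch (this illustrates only this one family of partitions, not the supremum; x and k are arbitrary):

```python
from itertools import product
from math import log

def partition_entropy(measures):
    """Information entropy -sum mu_i * log(mu_i) of a finite partition."""
    return -sum(m * log(m) for m in measures if m > 0)

x, k = 0.3, 5

# Partition the space into the 2^k cylinders C(0, s), with s a string of length k,
# using the translation-invariant measure constructed earlier.
measures = [x**s.count("A") * (1 - x)**s.count("B")
            for s in ("".join(p) for p in product("AB", repeat=k))]

print(partition_entropy(measures))                # grows linearly with k
print(k * (-x * log(x) - (1 - x) * log(1 - x)))   # same value, up to floating point
```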

The metric

Punting for now, until we need it.

OK. I don't really want to preempt your next installment, but I do have two comments so far.
(1) I guess you will explain precisely where the metric comes into the definition of the Ising model, so I won't preempt that discussion at all.
(2) You haven't yet defined the p-adic metric, so I should probably wait. But...suppose I have two strings, $s^1$ and $s^2$, defined by $s^i = \{s^i_1 s^i_2 s^i_3 \ldots\}$, where $s^i_n \in \{0,1\}$, and i=1,2. I think by the dyadic map we both mean an assignment of the numbers
$x^i = \sum_{n=1}^{\infty} s^i_n \, 2^{-n}$
to these strings. Will the dyadic metric be something like
$d(s^1, s^2) = |x^1 - x^2|$ ?
If not, please do ignore me and continue! If so, my comment is that this is not quite the same as looking at how many letters match up, and how many letters differ, which goes back to what I said way above: {...001} is near to {...000}, but {100...} is *not* near to {000...}. Moreover, all the states like {011110} will be nearer to {000000}, the reason for this discrepancy being the weighting of $2^{-n}$ in the definition of $x^i$.
I suppose in the finite N case, one might prefer the metric
$d(s^1, s^2) = \sum_{n=1}^{N} |s^1_n - s^2_n|$, or even
$d(s^1, s^2) = \left| \sum_{n=1}^{N} s^1_n - \sum_{n=1}^{N} s^2_n \right|$, if one doesn't care about the ordering of the spins. I can see there might be problems with this in the limit $N \to \infty$, so perhaps you are about to convince me that the p-adic metric is the best one can do (as far as looking at how many letters match up), in the infinite limit. Or perhaps I have jumped in too early, and the above is not where you are aiming for.

UPDATE Ah, I can see I may have conflated some different ideas above---at least I've reminded myself what the p-adic norm is. Well, if the comments turn out to be relevant to what you want to say, all very well. Otherwise, I will let you carry on with the exposition before trying to second-guess you. --Jpod2 16:38, 4 September 2006 (UTC)

The Hamiltonian

Next step: write down the value of the Ising model Hamiltonian for the subbase elements $C(n,s)$. The classical Hamiltonian for the Ising model is

$H_c(s) = -J \sum_i \sigma_i \sigma_{i+1} - B \sum_i \sigma_i.$

Here, the sum over i is a sum over all lattice positions i. At each lattice position, there is a spin $\sigma_i$ having a value of +1 or -1. The nearest neighbor interaction energy is J, with sign such that when spins are aligned, the energy is lower. The magnetic field is B. The argument s is understood to be a bi-infinitely long string in two letters. Let letter A be $\sigma = +1$ and B be $\sigma = -1$.

The quantum Hamiltonian will be a set of values $H(C(n,s))$ defined for the subbase elements $C(n,s)$. I'm calling this the "quantum Hamiltonian" as it is meant to be a mathematically rigorous formulation of a functional integral or Feynman path integral. (It differs from the Feynman path integral by not being weighted by a factor of $e^{-\beta H}$. Later on it will be seen that this weighting factor is essentially the partition function, and can be folded into the measure (because it is sigma-additive and obeys the other axioms of a measure).)

For notational convenience, let

$H(n,s) \equiv H(C(n,s))$

to avoid typing so many parentheses.
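As a concrete reference point, here is a minimal Python sketch of the classical Hamiltonian, truncated to a finite open chain (the bi-infinite sum cannot be evaluated directly, which is exactly the normalization problem noted below; the helper names are ad hoc):

```python
def classical_energy(spins, J=1.0, B=0.0):
    """Classical Ising energy -J * sum_i s_i s_{i+1} - B * sum_i s_i
    for a finite, open chain of spins (each +1 or -1)."""
    pair_term = sum(spins[i] * spins[i + 1] for i in range(len(spins) - 1))
    field_term = sum(spins)
    return -J * pair_term - B * field_term

def to_spins(s):
    """Letter A is spin +1, letter B is spin -1."""
    return [+1 if c == "A" else -1 for c in s]

print(classical_energy(to_spins("AAAA")))        # -3.0: all aligned, lowest energy
print(classical_energy(to_spins("ABAB")))        # +3.0: all anti-aligned
print(classical_energy(to_spins("AABB"), B=0.5)) # pair and field terms together
```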

The first important property of the quantum Hamiltonian will be translation invariance. That is, one will have

$H(n,s) = H(m,s)$

for all integers m, n. It may sound trivial, but this is important: the translational invariance already fulfills one aspect of what it means to be "fractal" or "self-similar": when a function "over here" looks like the function "over there". Now, an objection might be that a sine wave is self-similar in this sense, and no one calls sine waves fractal. To get the remaining aspects of self-similarity, we also need scaling as length scales are changed.

Without further ado:

$H(\Omega) = 0.$

The total space is normalized to zero energy.

$H(n,A) = -B, \qquad H(n,B) = +B$

and

$H(n,AA) = -J - 2B, \qquad H(n,BB) = -J + 2B$

and

$H(n,AB) = H(n,BA) = +J.$
The rules for creating this list are simple enough. Define

$P(n,s)$ = number of aligned pairs minus number of opposite pairs,

where, by "pairs", I mean "pairs of nearest neighbors". More formally, let V be the function that looks only at the letter in position zero and the letter in position one, and returns +1 if they are the same, and -1 if they are different. Let $\tau$ be the shift operator on the lattice, which takes (n,s) as an argument, and returns (n-1,s). Then

$P(n,s) = \sum_{j=n}^{n+k-2} V\left(\tau^j(n,s)\right),$

where k is the length of the string s. Letting #A and #B be the number of letters A and B in the string, as before, one has

$H(n,s) = -J \, P(n,s) - B \, (\#A - \#B).$

It is presumably not hard to see that as the string gets longer and longer, this approaches the classical Hamiltonian, thus fulfilling one of the requirements that the classical and quantum Hamiltonians correspond. (I notice here that I seem to have a normalization problem, to be resolved: the classical energy will be infinite unless there are equal numbers of A's and B's in the infinite string, and equal numbers of transitions. That's bad: and worse, there are conceptual difficulties with the question: what does it mean to have an "equal number of transitions"? I think a strong case could be made that, for the infinite lattice, the "classical Hamiltonian" cannot be rigorously defined, except as the limit of the quantum Hamiltonian.)
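A sketch of this rule in Python (the function name H is ad hoc; it reproduces the sample values listed above, here with J = 1 and B = 1/2 where a field is wanted):

```python
def H(s, J=1.0, B=0.0):
    """Value assigned to the subbase element C(n, s) by the rule above:
    H = -J * (aligned pairs - opposite pairs) - B * (#A - #B).
    Translation invariance means the position n never enters."""
    pairs = sum(+1 if s[i] == s[i + 1] else -1 for i in range(len(s) - 1))
    return -J * pairs - B * (s.count("A") - s.count("B"))

print(H("A", B=0.5))    # -0.5 : single A, field term only
print(H("B", B=0.5))    # +0.5
print(H("AA", B=0.5))   # -2.0 : one aligned pair plus two A's in the field
print(H("AB"))          # +1.0 : one opposite pair, no net field term
```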

Anyway, I note that, by increasing the length of the string by one letter, one has the identity:

$H(n,sA) + H(n,sB) = 2 \, H(n,s).$

I notice that this relation accounts for some of what I've been thinking of as being "fractal" in the back of my mind; I won't go into why it seems this way just yet.
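A brute-force check of this averaging identity, as written above, for all strings up to length six (the helper re-implements the rule from the previous sketch; the specific values of J and B are arbitrary):

```python
from itertools import product

def H(s, J=1.3, B=0.7):
    pairs = sum(+1 if s[i] == s[i + 1] else -1 for i in range(len(s) - 1))
    return -J * pairs - B * (s.count("A") - s.count("B"))

# Extending by 'A' and by 'B' shifts the pair and field terms in opposite
# directions, so the average over the two extensions is unchanged.
ok = all(
    abs(H(s + "A") + H(s + "B") - 2 * H(s)) < 1e-12
    for k in range(1, 7)
    for s in ("".join(p) for p in product("AB", repeat=k))
)
print(ok)  # True
```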

Anyway, (1) I have to go to work, and (2) I'll have to think a bit on how to best present a case for scaling in this example. I'll hand-wave a bit now: as one changes scale on the lattice, one wants to compare energies to those of similar-but-averaged lattice configs. I've never attempted to do this before, and have only the vaguest memory of having heard of such a thing in some lecture -- and so may have to stumble about a bit. I hope it works out :-) I also encourage you to try making a scaling argument on your own.

Commentary

I'm not going to write much here, but I've read and I think understood what you are doing so far. I'm not sure where the p-adic metric will come in, yet, but I will bite my tongue until you've completed that part of the programme. All the best--Jpod2 19:55, 4 September 2006 (UTC)
Well, there may be irrelevancies :-) I'm not sure why I wrote about the measure, other than to illustrate, on general principles, that this topology is measurable. It may still come in handy.
Hi. Well, it all seems quite clear so far, it's a good exposition. I need to think a bit more about it, but for now...
(1) I think the terminology `quantum hamiltonian' is maybe a bit confusing. For me, your hamiltonian is best thought of as the appropriate classical hamiltonian in the infinite-N limit. There may be some subtleties, but isn't it quite analogous to field theory, where one has the hamiltonian
$H = \int dx \, \mathcal{H}(x)$
formed from the hamiltonian density $\mathcal{H}(x)$? I realise I'm not formulating that too rigorously, but it seems like an appropriate analogy, with your `classical hamiltonian' being something like $H$ and your `quantum hamiltonian' H(n,s) being something like $\mathcal{H}(x)$. (Hmmm, I've just edited this, and the analogy maybe isn't quite right, but anyway.)
Conversely, I would use the term quantum hamiltonian either to mean the operator $\hat{H}$ on some appropriate hilbert space in the quantum theory, or else the vacuum expectation value of this operator, i.e. something like
$\langle H \rangle = \int \mathcal{D}\phi \, H[\phi] \, e^{-S[\phi]},$
where the integral is over the space of states.

Oh gosh, it's supposed to be a mathematically rigorous version of a Feynman path integral or functional integral. I can sketch it so that maybe it looks more familiar, like so:

$H(n,s) = \int_{C(n,s)} H_c(\sigma) \, d\mu(\sigma),$

which should resemble the maybe more familiar form

$H(n,s) = \int \mathcal{D}\sigma \; H_c(\sigma) \, \delta(\sigma_n - s),$

i.e. integrating over everything, except at the n'th location, where the field is held fixed. This is a key concept behind the whole discussion.
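A finite-lattice sanity check of this sketch: hold a short string fixed on a window of a small open chain, average the classical energy over every configuration of the remaining sites with the symmetric weight x = 1/2, and compare with the rule for H(n,s) given earlier. The chain length, window position and couplings below are arbitrary choices for illustration.

```python
from itertools import product

def classical_energy(spins, J=1.0, B=0.5):
    """Open-chain Ising energy -J * sum s_i s_{i+1} - B * sum s_i."""
    return (-J * sum(spins[i] * spins[i + 1] for i in range(len(spins) - 1))
            - B * sum(spins))

def H_rule(s, J=1.0, B=0.5):
    """H(n, s) from the rule above: -J*(aligned - opposite pairs) - B*(#A - #B)."""
    pairs = sum(+1 if s[i] == s[i + 1] else -1 for i in range(len(s) - 1))
    return -J * pairs - B * (s.count("A") - s.count("B"))

# Hold the letters 'ABA' fixed at positions 3, 4, 5 of a 10-site chain and average
# the classical energy over every configuration of the remaining 7 sites, each
# free spin being +1 or -1 with equal weight.
N, n, s = 10, 3, "ABA"
fixed = {n + i: (+1 if c == "A" else -1) for i, c in enumerate(s)}
free_sites = [i for i in range(N) if i not in fixed]

total, count = 0.0, 0
for assignment in product((+1, -1), repeat=len(free_sites)):
    spins = [0] * N
    for i, v in fixed.items():
        spins[i] = v
    for i, v in zip(free_sites, assignment):
        spins[i] = v
    total += classical_energy(spins)
    count += 1

print(total / count)   # conditional average over the cylinder
print(H_rule(s))       # same number: the cross terms at the window edges average out
```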

Anyway, it might be just an issue of terminology---but I thought it better to be explicit in case I am misunderstanding something in what you mean.
(2) On the translation inv/scaling/self-similarity. I agree that requiring translation invariance is appropriate, but I'm not sure this will lead us to fractal-ness. Remember the discussion we had over on the scale invariance pages. We were considering functions satisfying
$f(\lambda x) = \lambda^{-\Delta} f(x),$
either for all dilatations $\lambda$ (scale-invariant functions), or else a discrete subset (self-similar functions). In both cases there are as many continuous functions as you like satisfying this kind of condition, but which are *not* fractal.
I guess I'm saying I wouldn't be surprised if there is some form of self-similarity/scaling, for sure (e.g. the blocking transformation of the real space RG), but whether actual fractals (e.g. the ?-mark function) appear would be the interesting point. We will see...one other comment is that I think that, in the sections above where you comment on self-similarity, the arguments all hold in some form in the finite-N case, don't they? But perhaps the conclusions will be different in the infinite-N case, I'm not sure.
"I have to go to work,"
I'm glad one of us does something useful! :)