I am currently doing self-consistent calculations for various order parameters $\Delta_{i}$, but I run into a problem caused by spontaneous $U(1)$-symmetry breaking: once I reach the minimum, my simulations keep moving 'around the Mexican hat', so successive differences never fall below tolerance and the run counts as not converged (a plot of the phases shows they increase linearly while the norms stay constant).

[figure: phases of the $\Delta_i$ increasing linearly with iteration while their norms remain constant]

I can't just focus on the difference of the norms, because the phases of my order parameters matter for deciding whether a simulation converged: sometimes they don't change linearly but show erratic jumps, which indicates that the simulation did not properly converge, as shown below.

[figure: erratic phase jumps between iterations]

In summary, I would like suggestions for a 'tolerance parameter' that can distinguish convergence with linearly drifting phases from divergence with erratically changing phases.

1 Answer


Convergence Criteria

Write down all order-parameter components as a vector $\Delta = (\Delta_1, \Delta_2,...)^T$. This defines a complex vector space (a Hilbert space), where each vector corresponds to a mean-field configuration. The norm $|\Delta|$ of the state vector corresponds to the overall mean-field amplitude, while the global phase corresponds to the spontaneously broken $U(1)$ symmetry. In this space, the inner product $\Delta \cdot \Delta^\prime = \sum_i \Delta_i^* \Delta_i^\prime$ between two vectors measures how similar two mean-field configurations are, provided you account for their normalization.

Therefore, if $\Delta$ and $\Delta^\prime$ are the mean-field configurations before and after an iteration of the self-consistency loop, the quantity $$p = \frac{\Delta \cdot \Delta^\prime}{|\Delta| |\Delta^\prime|}$$ directly measures how much the relative weights of the mean-field components changed in this iteration. Note that $0 \leq |p| \leq 1$ always, and $|p| = 1$ if and only if the relative weights are unchanged. In this latter case, two things might still have changed in the mean-field configuration: its norm and its overall phase.

The phase is not important for self-consistency: if $\Delta$ is a self-consistent solution, then $e^{i \phi} \Delta$ is one too for any real $\phi$, since the two must have the same free energy if $U(1)$ is broken only spontaneously. The norm is important, though: there is no symmetry guaranteeing that states with different norms have the same energy. So we also need to ensure $|\Delta| = |\Delta^\prime|$.

Numerically, you need tolerance parameters $\epsilon_p$, $\epsilon_{\text{amp}}$, and your convergence criteria are $$1 - |p| < \epsilon_p,$$ and $$||\Delta|-|\Delta^\prime|| < \epsilon_{\text{amp}}.$$

Once these criteria are satisfied, the state $\Delta^\prime \approx \Delta$ is an approximation for a fixed point of the self-consistency equation. Note that any $e^{i \phi} \Delta $ is another equivalent fixed point.
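In code, assuming the configurations are stored as NumPy complex arrays (the function name and default tolerances below are illustrative, not from the question), the two checks might look like:

```python
import numpy as np

def converged(delta, delta_new, eps_p=1e-8, eps_amp=1e-8):
    """Phase-insensitive convergence check for a complex order-parameter vector."""
    norm_old = np.linalg.norm(delta)
    norm_new = np.linalg.norm(delta_new)
    # Hermitian inner product (np.vdot conjugates its first argument);
    # |p| = 1 exactly when the relative weights are unchanged.
    p = np.vdot(delta, delta_new) / (norm_old * norm_new)
    weights_unchanged = 1.0 - abs(p) < eps_p
    amplitude_unchanged = abs(norm_old - norm_new) < eps_amp
    return weights_unchanged and amplitude_unchanged
```

By construction, a pure global phase rotation $\Delta^\prime = e^{i\phi}\Delta$ passes this check, while a change of norm or of relative weights fails it.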


Phase Bias

As a side note, the fact that the phases increase so steadily with iteration suggests there may be some phase bias in your self-consistency equations. Perhaps there is a way to rewrite them to remove this behavior?

By phase bias I mean the following. Suppose your self-consistency equation (SCE) has the form $$\Delta = U \langle c c \rangle.$$ Then, while implementing the equation, suppose something goes wrong and by accident you implement $$\Delta = e^{i \phi} U \langle c c \rangle,$$ where $0< \phi \ll 1$ is some small angle. Then every time you compute an updated value of $\Delta$, you pick up this extra phase $e^{i \phi}$ on your state (a phase bias). If $\Delta_0$ is a solution of the original SCE, it is not a solution of the modified SCE. In fact, the modified equation has no non-trivial fixed point at all, since plugging the expectation values from $\Delta_0$ into the right-hand side gives $e^{i \phi}\Delta_0$ instead of just $\Delta_0$. This is the behavior you're finding in the right panel of your first figure, where the phase of the $\Delta$ components drifts in one direction.
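A toy illustration of this drift (the update map below is invented purely to isolate the bias $e^{i\phi}$, it is not your actual SCE):

```python
import numpy as np

def phase_biased_step(delta, phi):
    # Toy update: the unbiased map would return delta unchanged (a fixed point),
    # so the accidental factor exp(i*phi) is the only source of change here.
    return np.exp(1j * phi) * delta

delta = np.array([1.0 + 0j, 0.5j])
phases = []
for _ in range(5):
    delta = phase_biased_step(delta, phi=0.1)
    phases.append(np.angle(delta[0]))
# Each component's phase drifts by phi per iteration while |delta| stays fixed,
# so a naive difference |delta_new - delta_old| never goes to zero even though
# the phase-insensitive criterion |p| = 1 is satisfied at every step.
```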

You can modify your SCE by hand, introducing another factor $e^{i \phi^\prime}$ on purpose, and study the behavior for different $\phi^\prime$. My guess is that you can find some $\phi^\prime$ for which the drift mentioned above cancels, so that you finally obtain the fixed points you wanted.

One possible origin of the strange phase behavior you're seeing is using the complex conjugate or the transpose of some matrix in the way you calculate the expectation values.


Oscillating Phase

For the case where $\Delta$ oscillates between multiple configurations $\Delta[n]$ over subsequent iterations, we need to generalize the convergence criteria above. If we want to capture an oscillation between up to $N$ values, we need to store $N$ reference configurations. At each iteration, instead of storing only the last configuration, we store the $N$ last configurations from previous iterations. Denoting these by $\Delta[n]$, where $n=1,...,N$ labels the $n$th-to-last iteration, we calculate the values $$p_n = \frac{\Delta[n] \cdot \Delta^\prime}{|\Delta[n]| |\Delta^\prime|},$$ where $\Delta^\prime$ is the value after the most recent iteration.

The stopping criteria are then $$1 - |p_n| < \epsilon_p,$$ and $$||\Delta[n]|-|\Delta^\prime|| < \epsilon_{\text{amp}}.$$ If both of these are satisfied for a single $n$, the current order parameter configuration is too similar to another recent configuration in the loop, and this signals the oscillations you are concerned about.

Notice this generalization is useful for saving computational time (i.e. not getting stuck too long in a loop that leads nowhere), but the resulting configurations have no immediate physical significance that I can think of.
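A sketch of this history-based check, assuming the last $N$ configurations are kept with the most recent first, e.g. in a `collections.deque(maxlen=N)` (the function name is illustrative):

```python
import numpy as np

def match_in_history(history, delta_new, eps_p=1e-8, eps_amp=1e-8):
    """Compare the newest configuration against the stored history.

    history: sequence of the N last configurations, most recent first.
    Returns the 1-based index n of the first previous configuration that
    delta_new matches (up to a global phase), or None if there is no match.
    """
    norm_new = np.linalg.norm(delta_new)
    for n, delta_n in enumerate(history, start=1):
        norm_n = np.linalg.norm(delta_n)
        p = np.vdot(delta_n, delta_new) / (norm_n * norm_new)
        if 1.0 - abs(p) < eps_p and abs(norm_n - norm_new) < eps_amp:
            return n
    return None
```

A match at $n = 1$ is ordinary convergence; a match only at some $n > 1$ signals a period-$n$ oscillation, at which point the loop can be stopped early.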

  • hi, thanks for your answer. What do you mean by 'a phase bias'? – Commented Oct 31 at 10:06
  • Also, could you give me a bit of insight as to why this $p$ is a good parameter that can help me get around this issue? – Commented Oct 31 at 10:10
  • I edited the answer to elaborate on those points. – Commented Oct 31 at 17:04
  • Hi, I read your comments. I am not sure your $p$ parameter captures the fact that simulations sometimes fail to converge because of the phase. What I mean is: picture a situation where $\Delta_{i+1} = -U c \, \Delta_{i}$, where $c$ is positive. For positive $U$ the solution does not converge because the sign flips between iterations (as it should; for example, in superconductivity an on-site repulsion does not stabilize Cooper pairs). I am searching for a parameter able to capture the fact that this scenario is possible. – Commented Nov 3 at 5:25
  • I was trying to find a criterion that detects this oscillating scenario before reaching the maximum number of iterations, because time and resources are spent on iterations that will not yield meaningful results (some simulations physically should not converge). – Commented Nov 3 at 8:10
