Years ago, my mother gave me a simple battery-powered alarm clock designed like a Rubik's Cube. Only the top layer would twist, and it served as a knob to set the mode: time, alarm, or temperature.
I used it daily for many, many years, but it was inherently infuriating and shameful, as are all gifts from my mother. The Cube design it carried was permanently jumbled. There were stickers all over it, in standard Cube colors, representing its jumbled state, and of course, since it was a clock and not configurable, there was no way to match the colors and "solve" it.
I was forced, every day, to stare at a permanently unsolved, unsolvable Cube. I had mastered the Cube years ago, algorithmically, and could solve a standard one in 63 seconds, but I could do nothing about this infernal clock.
Well, I wasn't forced. Sure, I could just stop using it. I could purchase another clock that wasn't shameful. But you know how mother-son relationships can be. Anyway, I finally destroyed it with great satisfaction, and the Unsolvable Cube troubles me no more.
Of all the cubes in the world, there will always be one, his mother's gift, that is forbidden for him to solve.
This is a great mystery. It’s a Rubik’s complex.
The stickers were not individual, separate colors; they were large panels made up of the already-jumbled colors. In fact, I'm not even sure they were stickers, or whether they would come off easily.
Regarding your question of why I smashed the clock: it was not merely because I couldn't "solve" it.
> Were there 9 of each color on the visible faces at least?
I never checked that closely. It is apparently still available (see link above) so you may be able to find enough images to piece it together.
However, two areas lack color: the display, which is 3 horizontal tiles in the center of the "front" face, and the bottom, where the battery compartment opens, which is unpainted black plastic.
So no, even if you could move the stickers around, there aren't enough for coverage, and why would you cover the display face? I mean, do you want a Rubik's Cube or an alarm clock in this bargain?
I wouldn't have a problem with the scrambled state of the clock, so I might be less neuro-atypical than OP. But the fact that it is unsolvable at a glance (two reds on the same block), even if it could be manipulated like a real cube, and that there are no black bars between the colored blocks of the top row, would drive me equally mad.
I'll admit that when I saw the title I thought to myself "yes, I used to brute-force Rubik's Cubes all the time when I was a kid by taking them apart and reassembling them into a solved cube."
To be pedantic, we can "alter the sequence of colors" of a solved state into a solvable state, so it's not quite QED.
Anyway, I think you're agreeing with the person you're responding to; they're suggesting it's more fun to peel and re-stick stickers precisely because that's a way to achieve states that even mechanical disassembly can't solve.
Wasn't I already being pedantic enough about "Or peeling two stickers off a corner piece and swapping them to change the parity and make it impossible to solve."?
If you really want to make it physically impossible to solve and frustrating, swap two stickers between two cubes so they both have the wrong number of two colors, especially annoying with two colors on different faces of the same piece.
I think the parent commenter's point was that if you change stickers, you can't solve the cube, even if you twist corners. But if you twist corners, you can still "solve" the cube by changing stickers.
i.e. changing stickers is "more powerful" than twisting corners.
Indeed! I especially liked the exploding Rubik's Cubes! - you know, the cheaper ones you had to put Vaseline on to make them twist more easily. Therein lay the problem: if they jammed mid-twist, they ran the risk of spontaneous disassembly! :-)
> My initial implementation worked at a whopping 200 permutations per second. That’s incredibly slow, and meant that it would take well over a century (in the worst case) for my program to finish. Now, it works at about 190,000 permutations per second, with an estimated worst-case search time of 2 months. (I haven’t encountered a scrambled cube position which has taken more than 10 hours.)
> Instead of 10^20 moves to find connecting path(s) towards Solved in a Rubik's cube, this algorithm solves in...

2*10^10 from each side (and IIUC the solution side (1*10^10) can be cached?)
Interesting video. The approach in it is also inspired by meet-in-the-middle, but suffers from a significant deficiency: It requires a lot of memory—90 GiB—enough that the author had to borrow their university's compute cluster to conduct a solve, even when written and optimized in C++. Likewise, their "triple DES" presentation discussed another way that would require a database that's terabytes in size (specifically, memory that's O(N^0.5)). Nonetheless, I think it's a good exposition of meet-in-the-middle.
What's neat about the method in the post is that it requires O(N^0.5) time and O(N^0.25) memory, which allows running it on an entry-level consumer laptop (hours of time and a few GiB of memory) when implemented even in a dynamically typed, garbage-collected language.
> It requires a lot of memory—90 GiB—enough that the author had to borrow their university's compute cluster to conduct a solve
I remember something like fifteen years ago we needed a 128 GB computer to solve IC modelling (timing closure) problems. While such machines were reasonably available, the only supplier we could find was gaming-oriented, so we had a blinky-light tower PC in the corner of the server cupboard.
Anyway, never underestimate the power of a single computer, especially with a GPU.
It's also not hard these days to fire up an instance on AWS or other cloud service which will have up to a few terabytes of RAM.
Plus, you can always just bite the bullet and do it all out of swap - might not be as bad as you fear, and often can optimize whatever you're doing to be more online. There are countless tricks left over from the old days for doing out-of-core operations efficiently, whether disk or tape.
If you're firing up anything on AWS with that kind of RAM, you want to make sure you understand how much that's going to cost. The per second cost is always going to look really low and you will tend not to worry about it. However, once you do the math, you could find that you're easily and quickly spending more money than it would cost to buy a machine to run locally with that much RAM.
It's been fine when I've used terabyte instances for dynamic programming stuff similar to OP. You are babysitting it and waiting for a specific answer, not running a permanent service which will hemorrhage $$$ when you forget to turn it off. At dollars an hour, hard to forget...
The generic algorithm he describes for significantly paring down and brute-forcing a permutation space is fascinating -- in particular, the insight that (in the worst case) only sqrt(N) iterations are needed to brute-force a permutation space when states in that space have lexicographically sortable representations.
I love the simplicity of using lexicographic sorting to solve a problem -- even sorting things like ISO-8601 date strings is delightful to me :-)
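The sortedness trick generalizes well beyond cubes. Here's a minimal sketch in the classic subset-sum setting (my own toy example, not the article's code): splitting the search space in half and sorting one side turns O(2^n) enumeration into roughly O(2^(n/2)) work per side.

```python
import bisect

def subset_sum_mitm(items, target):
    """Meet-in-the-middle subset sum: enumerate each half's
    subset sums, sort one side, binary-search for complements."""
    half = len(items) // 2

    def all_sums(xs):
        sums = [0]
        for x in xs:
            sums += [s + x for s in sums]
        return sums

    right_sums = sorted(all_sums(items[half:]))
    for s in all_sums(items[:half]):
        need = target - s
        i = bisect.bisect_left(right_sums, need)
        if i < len(right_sums) and right_sums[i] == need:
            return True
    return False

print(subset_sum_mitm([3, 34, 4, 12, 5, 2], 9))   # True: 4 + 5
```

Same structural idea as in the post: ordering one half of the space lets you pair it against the other half without materializing the full product.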
Binary digit strings are lexicographically sortable. These are the essence of computer representation of information.
What's an example of a state space that lacks a lexicographically sortable representation?
Edit: in fact, "lexicographic" is an irrelevant detail in the blog post. It's just mentioned as a convenient way to define an ordering on the states. The sqrt(n) speedup is from having any ordering at all. As the article notes, mergesort relies on the same feature.
Lexicographic ordering of the sets X and Y is crucial in the problem described in the post because it admits an efficient algorithm for iterating through X*Y without needing to enumerate them prior. It would not work for e.g. binary-on-disk ordering, since algebraic laws don't carry over in that ordering.
If I'm not mistaken, this algorithm doesn't result in an optimal solution.
You can see that in the example in the end, a random cube resulted in a 20 move solution, which is the upper bound for any solution [0]. There are not many cube configurations that require such a long solution and the chance that we hit one randomly (and solved it that quickly) is incredibly small.
In order to make this algorithm optimal, you'd need to do iterative deepening, trying to find solutions of length 1, 2, etc. up until 20.
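For what it's worth, iterative deepening is simple to sketch generically (a toy example with hypothetical `moves`/`apply_move` arguments, nothing cube-specific): because depths are tried in increasing order, the first solution returned is a shortest one.

```python
def iddfs_solve(start, is_goal, moves, apply_move, max_depth=20):
    """Iterative deepening DFS: try depth limits 0, 1, 2, ...
    so the first solution found is guaranteed to be shortest."""
    def dfs(state, depth, path):
        if is_goal(state):
            return path
        if depth == 0:
            return None
        for m in moves:
            found = dfs(apply_move(state, m), depth - 1, path + [m])
            if found is not None:
                return found
        return None

    for depth in range(max_depth + 1):
        result = dfs(start, depth, [])
        if result is not None:
            return result
    return None

# Toy state space: integers, moves add 1 or 3, goal is 5.
print(iddfs_solve(0, lambda s: s == 5, [1, 3], lambda s, m: s + m))  # [1, 1, 3]
```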
Good point. I think it would also find the optimal solution (in the worst-case time) if it continued to search for shorter solutions over the entire space once it found one. If it happened to find a solution of length 14, for instance, it could stop considering those of 15 or more, so you would still avoid the two-month waiting time (the same can be done with DFS). Though, come to think of it, you probably mean to do this with iterative deepening by not throwing away the intermediate structure (as in BFS).
What I really appreciate about this article though is the solid presentation of math connected with computer science and the author throwing around multiple programming languages. They don't even see the code anymore...
At the moment the algorithm seems to check only for exactly 20-move solutions, so it won't find shorter solutions easily.
You would however find the optimal solution followed by some silly back and forth that can be identified, as long as the optimal one isn't 19 moves.
There is no need to do iterative deepening as you suggest.
The move set L, or more precisely
L := C^5 U C^4 U C^3 U C^2 U C^1 U C^0,
contains all non-redundant move sequences of length between 0 and 5. L has 621,649 elements. See e.g. the function that produces this [1].
Solutions thus are found between 0 moves and 20 moves inclusive.
If you ask the algorithm to solve a cube with the front face turned, it will immediately output the four words:
"", "", "", "F".
It will also return every other quadruplet of non-redundant moves that achieve the same state if you keep the solver running, for example:
"", "F'", "F", "F"
(Here, "non-redundant" means there are no obvious simplifications within each word.)
The algorithm is guaranteed to enumerate all non-redundant solutions of bounded length, which includes all optimal solutions, in the same worst-case running time of O(N^0.5) and memory complexity of O(N^0.25). The Common Lisp implementation does return on the first match, but that's an implementation detail, not a limitation of the algorithm.
Lastly, solutions whose length is short typically show up first, since the words "" * "" correspond to the identity permutation, the lexicographically least permutation, and hence the first permutation found in the search.
My research is on the more general form of this problem (the moves of a Rubik's Cube form a group; can we get from one state to another using the group's generators?). There are good algorithms for doing this, and for other computational group theory problems, but they are very complicated. Also, they don’t give you the best answer, just any answer. I’ve been considering writing about them, but it would probably be a fair chunk of a book!
EDIT: after further thought, my stuff is less applicable, because every small cube of a Rubik's Cube is unique, while my personal study is in moving things around where some values are the same — while it is true that you can view a side as “9 indistinguishable yellow squares”, they aren’t really, as the small cubes they are on can each be uniquely identified by their coloured sides :)
Just to note, the algorithm in the article applies to any permutation group, optimally solving the generator factorization problem in O(N^0.5) time and O(N^0.25) space, given a bound on the diameter of the group.
Yes, a "look" is basically determining which algorithm to apply to the cube based on its current state.
1 look last layer means that after the first two layers are solved, you would know an algorithm to apply to solve the cube based on the state of the last layer.
With the typical speed-solving method (CFOP), the last layer requires 2 "looks" (one for the orientation of the last layer, one for the permutation).
The standard speed-cubing method uses 9 “looks”, each followed by an optimized algorithm. This development (1LLL) combines the final two looks (each normally requiring a choice among roughly 40 unique algorithms) into one monster step with about 4,000. The number is big because there are many possible embeddings. Each look examines only the next subsection of the cube you plan to solve.
I will list all the cross-applications I have found for Rubik's cube solving to other disciplines, ordered from most to least useful:
1. It has an application to the discipline of waiting. This is easily its most powerful application.
2. It has applications to group theory, and having a good intuitive understanding of Rubik's Cubes really does help with some groups and permutations. It also imparts some other small amount of math-related intuition.
3. It has applications in being able to connect with other cubers, which can every once in a while open doors. It's rare, but sometimes you'll meet someone, and cubing will be a part of how you connect, resulting in a good social outcome. It has applications, in that way, to the discipline of socializing.
4. It has applications in being able to appear smart to some small subset of people (akin to wearing glasses, reading a book, and other small signifiers). "Appearing smart" is usually not a very useful discipline, but at times it is.
5. It has significant applications to the art of getting bullied.
6. It can greatly benefit one in their discipline of "being annoying". Cube loudly for greatest effect.
The problem of solving a Rubik's Cube is an instance of a more general problem called "factorization": Given a permutation group G which is generated by permutations g_1, ..., g_k, written:
G = <g_1, ..., g_k>
we seek to represent an element s in G as a short word over the generating set. That is, we want to write
s = g_a * g_b * g_c * ...
for some (a, b, c, ...) we must discover.
In Rubik's Cube speak, that's taking a scramble and finding a short sequence of moves that solves (or equally, reconstructs) that scramble.
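At toy scale, that factorization problem can be brute-forced directly. A sketch (my own illustration using a small symmetric group, not the article's meet-in-the-middle method):

```python
from collections import deque

def factor(target, generators):
    """Breadth-first search for a shortest word over the generators
    whose composition equals `target`.  Permutations are tuples
    interpreted as functions i -> p[i].  Toy-scale only; real
    computational group theory uses far more efficient methods."""
    def compose(p, q):  # (p . q)(i) = p(q(i)): apply q first, then p
        return tuple(p[q[i]] for i in range(len(p)))

    identity = tuple(range(len(target)))
    seen = {identity: []}
    queue = deque([identity])
    while queue:
        g = queue.popleft()
        if g == target:
            return seen[g]
        for name, gen in generators.items():
            h = compose(gen, g)
            if h not in seen:
                seen[h] = seen[g] + [name]
                queue.append(h)
    return None

# S_3 generated by a transposition "s" and a 3-cycle "r"
gens = {"s": (1, 0, 2), "r": (1, 2, 0)}
print(factor((2, 1, 0), gens))  # ['s', 'r']
```

BFS gives shortest words but needs memory proportional to the group size, which is exactly why it breaks down at Rubik's Cube scale.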
A lot of problems can be framed as problems in finite group theory, especially in quantum physics and quantum information theory. For instance, to characterize the performance of a quantum computer, we might run a routine that executes a long sequence of so-called unitary operations from a special group called the Clifford group. But in order to run the characterization routine, we must be able to express the product of that sequence as a short word over the generators—a problem identical to the Rubik's Cube solving problem.
So the Rubik's Cube gives us tremendous insight into what kinds of methods may or may not work in practice, since it's a non-trivial group that's large but tractable, abstract but realizable as a plastic toy.
That's all for exploring solutions mathematically and/or computationally. As for learning to solve a Rubik's Cube by hand? Not sure it's practical for very much, but it is pretty cool.
The 2010 result is a proof that all cubes can be solved in 20 or fewer moves in the "half-turn metric". TFA is using "brute-force" differently: how to solve a Rubik's Cube without special knowledge about the cube, aside from how its sides turn.
Of course it does it without knowing what kind of plastic, metal, or wood the cube is made of.
Anyways, since it seems you misunderstood the article slightly - the 2010 brute-forcing was done with the help of a datacenter. The new guy estimates his method would take ~2 months on a single machine, maybe less, maybe more - it's a post, not a peer-reviewed paper.
Purely in principle, it shouldn’t be hard to solve a Rubik’s Cube with a computer, right?
Our program would have three parts:
1. A model of the Rubik’s Cube, that is, some data structure that represents a cube state.
2. Some functions which can simulate turns of each side.
3. A solving procedure which takes a scrambled cube, tries every possible turn sequence, and stops when solved.
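Those three parts fit in a page of Python, even if only for a stand-in puzzle. A sketch with a hypothetical 4-sticker toy instead of a real cube (real face turns would just be bigger permutation tuples):

```python
from itertools import product

# 1. Model: a state is a tuple of sticker labels.
SOLVED = ("A", "B", "C", "D")

# 2. Moves: each move is a permutation of positions
#    (toy stand-ins here, not real cube face turns).
MOVES = {
    "x": (1, 0, 2, 3),   # swap the first two stickers
    "y": (0, 2, 3, 1),   # cycle the last three stickers
}

def turn(state, move):
    perm = MOVES[move]
    return tuple(state[perm[i]] for i in range(len(state)))

# 3. Solver: try every move sequence, shortest first.
def solve(scrambled, max_len=6):
    for n in range(max_len + 1):
        for seq in product(MOVES, repeat=n):
            state = scrambled
            for m in seq:
                state = turn(state, m)
            if state == SOLVED:
                return list(seq)
    return None
```

Swapping in 54-sticker states and the real face-turn permutations gives the naive cube solver; the article's point is that step 3, done this naively, is hopeless at cube scale.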
It is more efficient to travel along a Hamiltonian circuit and visit each state of the cube. When you reach the solved state, you stop traveling.
Bruce Norskog's cycle would work, but it will require executing somewhere around 43,252,003,274,489,855,999 moves (at worst) to arrive at the solution to a given scramble. :)
This is super interesting! The meet-in-the-middle approach described in the article reminds me of the direct multiple shooting method for solving boundary value problems.
I figured out how to solve a 3x3 over a period of a few months. I was the passenger in a carpool for 1.5 hours each day, so I just kept at it. I found the move pattern for swapping the corners on the bottom layer by trying and memorizing different move patterns until one worked.
That must have been a rewarding experience! I made the mistake of memorizing the solution when I was a kid so solving the cube feels like a routine instead of actually solving a puzzle.
My (young) kids got interested last year, so I learned the algorithms to be able to put it back for them.... And got kinda hooked myself!
For me though, I don't have the sequence memory to be able to work it out from scratch. I have the logic, but have to write down each step and it gets very very slow.
Learning the "algorithms" from online courses was like learning ballroom dance, for me, and learning increasingly quick and specific solutions for different stages is still satisfying. It's a physical / slight mental challenge, rather than a puzzle to work out.
If you want to recapture that feeling, look for Rubik-like puzzles in different shapes and work them out on your own. Pyraminx, Megaminx, Skewb, Square-One, Helicopter Cube, Gear Cube, Dayan Gem are names to search for among many others, or even just the bigger or unusual cuboids (like 3x3x4) if you've never done those.
Same for me, except I was forty-something. I captured something of the experience after stumbling on the “duncan beach ball puzzle”.
After memorizing the moves for the cube I read a lot about how a person could solve the puzzle (and ones like it) from scratch. So then I went looking for other permutation puzzles to try the technique.
The technique was basically: do some sequence of moves, then one move, then undo the sequence. Afterwards, notice which 2 or 3 elements swapped places in the permutation. Everything else remains the same.
This was easy enough with the beach ball and the classic 15 puzzle. Give it a try!
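That "sequence, one move, undo" trick is conjugation, and you can watch it act on raw permutations. A toy sketch (permutations of my own choosing, treating a permutation `p` as the function `i -> p[i]`):

```python
def compose(p, q):
    """(p . q)(i) = p(q(i)): apply q first, then p."""
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

A = (1, 2, 3, 4, 0)   # setup sequence: a 5-cycle
B = (0, 1, 2, 4, 3)   # "one move": swaps positions 3 and 4

# Do A, then B, then undo A: the conjugate A^-1 . B . A.
conj = compose(inverse(A), compose(B, A))
print(conj)   # (0, 1, 3, 2, 4): the same swap, relocated by A
```

The conjugate touches exactly as many elements as B did, just in different places, which is why the technique produces clean 2- or 3-element swaps on any permutation puzzle.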
Exactly the same experience and feeling for me. Taught myself how to solve the cube during the pandemic but now all I know how to do is solve a cube in a specific order.
It takes the same amount of time every time and it doesn't actually feel like I'm solving it but instead putting it back to its original state.
Someone was curious about my method, so I made a YouTube video. After a year or so of not solving I was rusty, and my own YouTube video taught me how to do it again.
Yes. As I learned as a young kid, you can pop off the individual cubes with a screwdriver and reassemble in a solved state. I consider this method “brute force”.
Natural selection is pretty much the opposite of a heuristic. Complex adaptive systems embodied in self-replicating chemical nanobots create random variation when they replicate. The varieties that can't reproduce, or reproduce less efficiently, are dead ends. There is no teleology, and conditions change in nonlinear ways. There is no final condition, nor any meaningful way to infer a better pathway in advance.
Evolution by natural selection is literally just biochemistry brute forcing survival.
Natural selection constrains the possible variation for each iteration, so only a minuscule subset of possibilities is explored. That doesn't sound very brute-forcey to me.
It's not the selection that poses constraints. The random variation that occurs during replication goes towards all adjacent possibility spaces, by definition. Selection occurs only after this brute-force, blind branching towards anything that can be changed at that stage.
Right, but per the article, Korf’s algorithm takes hours and Thistlethwaite’s algorithm doesn't necessarily find the optimal solution. I wonder if a self-learning AlphaZero approach could result in finding optimal solutions in seconds.