I don't understand the pen/physical notebook thing. It's slow to write, insanely slow to search what you've written, almost impossible to copy or share.
Funny. I was extolling the virtues of the spiral notebook that's always on my desk to a coworker I was trying to get out of some trouble, and I got... silence. That notebook is the difference between said coworker and me. She didn't get it.
That notebook is the fastest, most accessible tool for capturing my thoughts. I can state concisely what was discussed in a team meeting last October before any note-taking tool boots up. I know the kids' names, birthdays, and favorite movies of almost all my coworkers, and I can glance at them without switching windows while sharing my screen.
The impossibility of that content being legible to anyone else, or being shared, is a feature I value very highly.
I take it where I want, introspect, take notes. No screen. No distractions. I doodle, draw lines, write jokes, whatever.
Pages on the right are work things. On the left are my ideas. The index on the first page tells me exactly where to turn for a given idea, or for the mockups I sketched for filling out tedious forms.
I know when I'm spinning my wheels. I can see the gaps in my thinking from months ago. I can see patterns in human behavior that I would otherwise have not noticed.
The simplicity is a huge advantage. I stopped looking for anything better. I don't try to promote it either (except on rare occasions). It's saved me hours.
I think an analog note-taking device (woah!) requires more discipline than a digital one. In the digital world, you can always rearrange your chaos at almost no cost, whereas you are screwed in the analog world. I'm curious: how do you organize your notes, if you don't mind sharing?
First, there's a work notebook and a personal one. Both are semi-structured, but the work one has more structure.
The first page is always the index: split vertically, it gives roughly 80 entries of about 3-4 words each, one for each page, which I number as I write through the book. The index usually just has pointers to the days when I got interesting ideas.
Each right page is for one week, which I label up top. A small margin is dedicated to recurring meetings. Using only the right side makes it quick to flip through. The rest of the page is for todos and logs.
Left pages are for related thoughts, ideas, etc. from that week. It sometimes overflows, but not by much.
For wordy stuff, I use pages from the back of the book. I don't use sentences much. Just a few words with lines connecting them.
That's pretty much it: an index, numbered pages, and using just the right side per week. It works wonders for how simple it is. You'll find something else that works for you. Don't overthink this. Just start with a structure and let it evolve.
Side note: I do use a folder of plain-text notes on my work laptop (kept open in Sublime) for links and text that benefit from copy and paste. I wouldn't care if it all got deleted or leaked to the world. I also have another folder of interesting bookmarks and articles exported to PDF for reading on my phone during a flight, and a dozen or so Google Docs with my thoughts on topics I'm interested in.
> You'll find something else that works for you. Don't overthink this. Just start with a structure and let it evolve.
I'm currently looking at a stack of lovely Leuchtturm1917 A5 notebooks. The way I've taken notes for the last three years has been chaotic, which is why I was interested in your way of doing it. I think finding a mode of work and letting it grow and iterate over time is the way to go. I see room for improvement in how I structure my notes so that I can find them again - that's what I'm struggling with in my analog world. So, thanks for your insights, they are really valuable!
What do you do when you make a mistake? Do you use an eraser instead of backspace? That is my major problem with notebooks.
I type much faster on a keyboard, I can read it even a month from now (which I can't do with my own handwriting), and when I make a mistake, I can quickly and easily correct it.
I do almost no organising of my paper notes. The only things I do are add a date in the corner of the first page when I start a specific note, and keep index pages where I list page/note titles (or topics and themes, since not everything has a title) along with the page number.
I often browse my notes even when I'm not looking for anything. I read what I've been thinking previously because that often sparks new ideas and thoughts.
One thing where I find pen and paper superior to digital is that it's easy to write in the margins, draw arrows and annotate. When I got my first iPad and tested out digital notebook tools (with stylus), I was excited about the idea that I can resize and move my existing drawings around.
Then it took me a few days to notice that I don't really ever need that. I don't need my "finished" notes to look tidy or good. I got over the need to have organised and structured notebooks and embraced the chaos.
I guess it's different things for different people. For me, the flexibility of paper is superior to any digital solution because it has the shortest "input lag" or "feedback loop" to my brain. I'm happy to sacrifice other potential benefits for that.
Writing by hand definitely is better for a squirrel brain like me. And searching is overrated: most note-taking is write-only and tends to accumulate without anyone ever reading it back. So optimize for memory retention, so you can later form better mental associations with the material, rather than for how searchable it is on a computer. Notes on a PC are inert; what you want is to integrate their contents into your brain so you can actually do something with them.
If you want to get the most out of a physics lecture, leave the laptop at home. Instead, bring a spiral notebook and some colored pens. Write everything down the prof writes on the blackboard.
It's remarkable how much of the lecture you'll remember. And when you read your handwritten notes, you'll remember the lecture that went with it.
I managed it for 4 years in college, with usually 3 hours of lecture per day. Was it work? Sure. One was busy the whole lecture. But the results were clear.
My favorite pens at the time were the Pilot pens. Today I love the Tul pens.
I have long since scanned them all in. I should post them on my website, just for fun.
I would copy what they were writing. The professors had 9 blackboards available, and they'd use them all during a 55-minute lecture, and then some. Some would write with their left hand while erasing with the right (!). It was a helluva time for me. Never before or since have I learned so much so fast. Blink and I fell behind.
In retrospect, I sorely wished I had set up a cassette recorder and recorded all the audio. It would be a gold mine today, as all those lectures are lost to time.
On the other hand, I had no money to buy cassette tapes at the time.
I never learnt much in those sort of lectures. Eventually I figured out I needed to study the material before the lecture and use the lecture to cement what I had already learnt or occasionally answer questions that had come up.
I also use a notebook and pen constantly as a software engineer - but never when I need any of the properties you just mentioned.
It is absolutely the wrong tool for any long-term information store. Ephemeral think-it-through process notes only. Obsidian for anything that should live longer than the current problem I’m trying to solve.
I used to have a hybrid setup of paper as an offloading tool plus Notion as a knowledge management system. However, since moving to Obsidian, I've never felt the need to use paper. Adding a line item in my daily note is much faster and more efficient for me.
Only to an extent. I write fast enough with pen and paper that my thinking is the bottleneck — which isn't really that fast. I don't need to write down everything I think so it also acts as a filter and processor of those thoughts.
> insanely slow to search what you've written
Compared to a digital notes system, sure. But the way I use my notes, I don't usually need full-text search to find things. I remember what I've been working on, I often browse through my notebooks to revisit ideas, and most of these notebook notes are a kind of "working copy" where search only matters for the feature I'm currently working on, so it only needs to be fast to search across a few days or weeks.
I also copy many of my notes into digital form when there's something worth capturing for long-term storage, for the sake of searching, for example.
> almost impossible to copy or share
100%. Sharing is not a consideration for me, these are my raw pure thoughts and explorations, they don't often make much sense to other people as-is. Sometimes I may take a photo of a UI sketch or something to share if needed but otherwise when I have something to share, I write it based on my notes rather than sharing my notes as they are.
Searching through a physical notebook does take a few moments, but the upside is all of that context you get to drink in during the search.
I typically get through a couple of B5 pages a day, so homing in on the thing I want to look at is a matter of opening the notebook to roughly the right fortnight and then flicking through up to a dozen pages in either direction. A tenth of a second per page is enough to scan for major events - what projects was I working on at the time? Were there any major interruptions to my flow, or changes in direction?
And then, once I've found the right day, seeing the detail of everything I was doing at the time triggers a flood of memories - I remember all of the conversations, decisions, problems, and ideas that were current at the time of the entry in question.
I've tried a number of digital notetaking systems, but none has been able to give me so much context so quickly as a paper notebook.
In the context of this post it's not about preserving or sharing the thoughts. Writing, in this case, is a "thinking tool". Forcing yourself to materialise the thoughts as actual, written, text helps form clear ideas.
That's so alien to me. For me, writing by hand is frustrating to the point of distraction. If I want to think clearly, my best bet is to do it silently, in my head.
To me, if the problem is too complex, or more likely, if I expect to be distracted by family and chores, "building in my head" is not the best option, because it all falls apart and I need to build it up from scratch (though admittedly faster than last time).
I use A4 sheets of paper, pens in different colours, and fluorescent markers. Nothing beats that for me (and I have an iPad Pro with a stylus), and I use Emacs with Org mode too.
I use an automatic scanner to store the important sheets as documentation. Now you can even send those to Gemini or Google Cloud to digitise them cheaply.
At any given time I have four sheets of paper on my desk that I can see in parallel, but I could have 8 or 10 for complex problems, with a total area equivalent to engineering or architectural blueprints.
Having said that, I can draw and paint very well, as I've been interested since childhood and had formal training. It's probably not for everyone, but it is for me.
I believe that is the beauty. It makes you intentional; your mind slows to a pace that does a double write to your brain (helping you remember better). If remembering with your brain is the RAM, then writing it down is the ROM.
I use a notebook. It forces me to slow down. I don't like to cross out or erase blocks of text, so I actually think about what I'm about to write before writing it. It helps the same way rubber duck debugging does.
They aren’t optimising for speed, sharing, or searchability. Notes of this type aren’t an inadequate database, but a tool for thinking. Note-takers visualize and symbolise their thinking while they work through a problem. Writing and drawing reduce cognitive load and externalise information so it doesn’t have to be held in memory. Notes also create a useful record of that process so you can see how you got to your solution. Notebooks are just a convenient UI for that process.
I took the pen + notebook idea a bit further: I use pen + paper (a single sheet of paper).
To me, none of the qualities you listed matter when it comes to sketching ideas, thinking about problems, or staying on track while problem solving. When I'm done with the task, I read through my notes once and throw them out.
That's probably the most important thing. It forces you to slow down, think it through, and actually absorb it.
Searching what I have written in the past would be useless for me because it wouldn't necessarily make sense. It's more about the physically writing it out.
In comparison it's much faster to draw with, infinitely more portable, and much less distracting. I use pen and paper for self-notes that are throwaway in nature or for exploring ideas. For artefacts that are meant to be shared with others, I use a computer.
I like it when trying to work things out/design things as I can easily scribble down quick drawings to keep myself right. This is much easier with a pencil and paper than trying to do it on a computer to me.
Division on this topic reminds me of the division between those who prefer light vs heavy development tools. Some enjoy mastering and wielding powerful tools, but others see them as a distraction.
People's brains all work differently, and for some of them writing by hand helps them think better; apparently you're not in that group. I don't think there is much else to understand.
We don't know if singularities are even possible. Maybe the universe has some crazy repulsive force when atoms or subatomic particles get really really close (closer than in neutron stars, where atoms are femtometers apart).
>Maybe the universe has some crazy repulsive force when atoms or subatomic particles get really really close (closer than in neutron stars, where atoms are femtometers apart).
As I understand it, the surface of a neutron star is an iron shell.
"Current models indicate that matter at the surface of a neutron star is composed of ordinary atomic nuclei crushed into a solid lattice with a sea of electrons flowing through the gaps between them. It is possible that the nuclei at the surface are iron, due to iron's high binding energy per nucleon. It is also possible that heavy elements, such as iron, simply sink beneath the surface, leaving only light nuclei like helium and hydrogen. If the surface temperature exceeds 10^6 kelvins (as in the case of a young pulsar), the surface should be fluid instead of the solid phase that might exist in cooler neutron stars (temperature <10^6 kelvins)."
Reading a little further into the wiki, the depth of atomic matter is controlled by the neutron drip line. Since neutron stars have a maximum mass, this is likely a feature that they all exhibit.
At the beginning of the neutron drip, the pressure in the star from neutrons, electrons, and the total pressure is roughly equal. As the density of the neutron star increases, the nuclei break down, and the neutron pressure of the star becomes dominant. When the density reaches a point where nuclei touch and subsequently merge, they form a fluid of neutrons with a sprinkle of electrons and protons. This transition marks the neutron drip, where the dominant pressure in the neutron star shifts from degenerate electrons to neutrons.
There are no "atoms" in neutron stars. The density and temperature are so high that they are "crushed" and that the electrons and protons form neutrons. The result is tightly packed neutrons, hence the name.
I believe it is theorised that it might be possible to go even one step further to a "quark star" since neutrons are not elementary but made of quarks. No idea what a black hole might look like with no singularity...
You can get this easily by reformulating gravity's effect on spacetime as slowing down the speed of light/causality and putting in a natural bound that asymptotically approaches zero. It should agree with GR everywhere except at extremes like black holes.
Looking at gravity as a slowdown of c is appealing because it suggests a computational cost of massive particles. As stuff gets more dense, the clock of the universe must slow down.
Actually, behind the event horizon, because of the light-speed limit (the highest possible rate of information transfer in the universe, which also applies to the weak- and strong-force field particles), the forces that hold protons and neutrons together fail to work as they do outside, so it's a tangled mass of quarks and other baryonic matter.
GR does not describe the interior topology of black holes, beyond predicting a singularity. Is there a hard boundary with no hair, or is there a [knotted or braided] fluidic attractor system with fluidic turbulence at the boundary?
SQR (Superfluid Quantum Relativity) seems to suggest that there is no hard event horizon boundary.
I don't understand how any model that lacks descriptions of phase states in BEC superfluids could sufficiently describe the magneto-hydro-thermo-gravito dynamics of a black hole system and things outside of it?
It is unclear whether mass/energy/information is actually drawn into a supermassive or a microscopic black hole; couldn't it be that things are only ever captured into attractor paths that are outside of the event horizon?
Does Hawking radiation disprove that black holes don't absorb mass/energy/information?
We do know that something which looks exactly like a singularity exists, though - i.e., whatever black holes are, our observations match the predictions very well.
So if singularities don't exist, then some other weird object that naively looks like one must.
Do they look like singularities? As I understand it, any object that crosses a certain density threshold, to a point where light cannot escape its gravitational pull, is effectively a black-hole (even if the mathematical model for them is more purist, only described by a handful of parameters). I don't think they need to be infinitely dense to explain our observations.
You could say that we do observe a singularity, not in the centre of the black-hole but in its event horizon. But technically that's just an infinity in the maths not a physical singularity, in the sense that if you were there it would just seem like normal space.
This is not actually a density threshold; the more mass you have, the less density you need/get within the event horizon, meaning bigger black hole => lower average mass density ("Schwarzschild density").
And this Schwarzschild density is comparable to the mass density of neutron stars (or atomic nuclei) for small black holes, and can be as low as the density of water (!!) for the supermassive ones.
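A quick back-of-the-envelope check of that claim (a sketch, using the Schwarzschild radius formula and the naive Euclidean ball volume, which is not the true interior volume):

    import math

    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8        # speed of light, m/s
    M_sun = 1.989e30   # solar mass, kg

    def schwarzschild_density(mass_kg):
        # Mass divided by the Euclidean volume of a ball of Schwarzschild radius r_s = 2GM/c^2.
        r_s = 2 * G * mass_kg / c**2
        return mass_kg / (4 / 3 * math.pi * r_s**3)

    for solar_masses in (10, 1e6, 1e8):   # stellar-mass, Sgr A*-scale, large supermassive
        rho = schwarzschild_density(solar_masses * M_sun)
        print(f"{solar_masses:.0e} M_sun -> {rho:.1e} kg/m^3")

For ~10 solar masses this comes out around 10^17 kg/m^3 (neutron-star territory); by ~10^8 solar masses it has dropped to roughly the density of water.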
Right, makes sense, it's all about it causing a sufficiently "steep" space curvature and/or over a long enough distance, to offset the speed of light trying to escape. Of course, high density is not the only way to have that effect. Thanks for the insight!
The no-hair conjecture says that a Kerr-Newman black hole (KN BH: stationary, eternal) has only position, linear momentum, angular momentum, and electromagnetic charge.
3 components of linear momentum and three of position can be removed by keeping the KN BH at the origin of a system of coordinates. The KN BH doesn't evolve with time so we can remove two related time components as well.
This leaves us with 3 components of angular momentum ("spin"), electromagnetic charge (because by definition the KN BH is immersed in an electromagnetic field), and mass.
Schwarzschild BHs are a special case of KN BH where spin and electromagnetic charge vanish.
But we could add other fields with charges, and make those charges more complicated thanks to interactions among the matter fields. That's not a Kerr-Newman black hole any more, though. In practice the other standard model fields don't really make much difference: the charges will tend to neutralize before gravitation is relevant, and won't build up around the BH itself. KN BHs are in that sense electromagnetically quasi-neutral.
Things falling into a KN BH cause a perturbation that decays quickly away, changing the mass, and possibly spin, of the BH held at the coordinate origin. A change in electromagnetic charge will probably "reach out" and capture a charged particle in order to neutralize. Electromagnetic attraction is much stronger than gravitational attraction, while also being long range. However neutralization is not necessarily instant (the closest proton might be several light-minutes away, for example) or completely matched (oops, two protons are electromagnetically nudged into an infalling trajectory in response to the small negative charge, so when the second one arrives there'll be a small positive charge for a bit until a further electron is pulled in...). So quasi-neutral.
These parameters are only about the horizon. It says nothing about whether the mass was a bunch of individual protons and electrons or a bunch of neutral hydrogen or a bunch of heavier molecules. (The same "says nothing" also exists classically without reference to particles: start with a Schwarzschild BH and drop in a uniform spherical shell of mass M or two concentric uniform spherical shells of mass M/2 each, and after some "balding" time we cannot tell whether our increased-mass Schwarzschild BH had one or two shells dropped into it).
More prosaically, "no hair" is a statement about the stability of black holes in the face of small perturbations. If you throw something (a star, a big gravitational wave) into a black hole and wait a bit, does the resulting configuration still look like a black hole in the sense that the trajectories one grinds out of the Kerr-Newman metric (with adjusted mass, spin and charge parameters) accurately represent the trajectories around the new configuration?
There is a lot of literature on black hole stability justifying a "yes" answer.
Note that the average mass density within the event horizon of a black hole is not particularly large; for big black holes it is actually smaller than the density of main sequence stars (!!).
Naively, I would have expected this to provide a good lower bound for "largest possible mass density", but it's actually just lower than neutron star density for pretty much all black holes observed so far.
The mean density of the interior is much much much lower than you think, because the Schwarzschild interior volume is enormously larger than that of a closed Euclidean 3-ball of the Schwarzschild radius.
Some details:
Christodoulou & Rovelli 2014 showed that the maximal interior volume of a Schwarzschild black hole increases with time <https://arxiv.org/abs/1411.2854>. There is plenty of follow-on work in the literature for other varieties of black hole, and it is pretty generic that a black hole gets REALLY BIG inside as it gets older.
I have previously aimed HNers at DiNunno & Matzner 2008 <https://arxiv.org/abs/0801.1734> (this is also good teaching material for thinking about different systems of coordinates on a Schwarzschild BH spacetime).
One might get some intuition by thinking about pouring material into a (non-static) black hole. When do you saturate the black hole? Does it ever fill completely up? How does its outer boundary (the horizon) evolve as you pour more and more material in?
You're not ever actually seeing the singularity. The place of INFINITE density. You're seeing the event horizon/curved spacetime around it, at best. Those can also appear around non-singularities.
Well then it is not a singularity either, is it. Feels a bit like a misunderstanding is happening. "Singularity" is a well-defined mathematical concept, not just a cool synonym for "black hole".
That would mean that what prevents them is some other mechanism doing it accidentally. My view is that they are prevented by GR itself (by time dilation).
But how do you classify a question as high vs low complexity? Some seemingly simple questions can turn out to be very, very complex. For example, an integer solution to
x³ + y³ + z³ = 42
took over a hundred years of compute time to find.
Or another seemingly simple equation with positive integers x,y,z
x/(y+z)+y/(z+x)+z/(x+y) = 4
requires elliptic curve knowledge, and the solution is huge
x = 154476802108746166441951315019919837485664325669565431700026634898253202035277999
y = 36875131794129999827197811565225474825492979968971970996283137471637224634055579
z = 4373612677928697257861252602371390152816537558161613618621437993378423467772036
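For anyone who wants to convince themselves, the quoted solution can be checked with exact rational arithmetic; a quick sketch in Python (it should print True if the values above are correct):

    from fractions import Fraction

    x = 154476802108746166441951315019919837485664325669565431700026634898253202035277999
    y = 36875131794129999827197811565225474825492979968971970996283137471637224634055579
    z = 4373612677928697257861252602371390152816537558161613618621437993378423467772036

    # Exact arithmetic: no floating-point rounding on these ~80-digit integers.
    total = Fraction(x, y + z) + Fraction(y, z + x) + Fraction(z, x + y)
    print(total == 4)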
Query complexity in this context is based on how many tokens it took for the model to respond to a query correctly based on a ground truth dataset like GSM8k. The adaptive classifier learns over this dataset and then we use it at inference for classification.
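To make that concrete, here's a toy sketch of the idea (hypothetical code, not the actual optillm/adaptive-classifier implementation; the TOKEN_THRESHOLD value and budget_for helper are made up for illustration): label each training prompt by how many tokens the model needed to answer it correctly, train a lightweight text classifier on those labels, and use the prediction to pick a token budget at inference time.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    TOKEN_THRESHOLD = 512  # hypothetical cutoff between "low" and "high" complexity

    # (prompt, tokens the model used for a correct answer) pairs from a ground-truth set like GSM8k
    training_data = [
        ("What is 17 + 25?", 40),
        ("A train travels 60 km in 45 minutes; what is its average speed in km/h?", 350),
        ("Prove there are infinitely many primes of the form 4k + 3.", 1800),
        # ... many more examples in practice
    ]

    prompts = [p for p, _ in training_data]
    labels = ["high" if t > TOKEN_THRESHOLD else "low" for _, t in training_data]

    # Lightweight classifier that routes new queries by predicted complexity.
    router = make_pipeline(TfidfVectorizer(), LogisticRegression())
    router.fit(prompts, labels)

    def budget_for(query: str) -> int:
        # Spend a larger reasoning budget only on queries predicted to be complex.
        return 2048 if router.predict([query])[0] == "high" else 256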
Yes, if you only care about correctness, you always use the maximum possible inference compute. Everything that does not do that is trading correctness for speed.
Yes, the goal here is to avoid overthinking and be as efficient as possible in terms of the minimal tokens required to solve a query. Often, queries that require too many tokens are unlikely to lead to correct answers anyway; otherwise they would show up when we are learning the classifier.
If you ask it to rethink the problem because you've found a flaw, does it bump up the complexity and actually think about it? Like a person might give you a quick answer to something, and then questioning the answer would cause them to think deeper about it.
The short answer is that, in general, yes, it helps improve accuracy; there is a whole line of work on self-consistency and critique that supports it. Many of those approaches are already implemented in optillm.
If compute is limited, then dedicating more resources to the questions that are more likely to need it will increase correctness overall, even if it may decrease correctness for some individual responses.
I think there exists a separate skill for classifying problems by difficulty, apart from being able to solve them. This skill can be developed from both directions by learning which problems have been solved and which haven't been.
If someone asked me to find solutions to these example equations, there are three complications that I would immediately notice:
1. We are looking for solutions over integers.
2. There are three variables.
3. The degree of the equation is 3.
Having all three is a deadly combination. If we were looking for solutions over reals or complex numbers? Solvable. Less than three variables? Solvable. Degree less than 3? Solvable. With all three complications, it's still not necessarily hard, but now it might be. We might even be looking at an unsolved problem.
I haven't studied enough number theory to actually solve either of these problems, but I have studied enough to know where to look. And because I know where to look, it only takes me a few seconds to recognize the "this might be very difficult" vibe that both of these have. Maybe LLMs can learn to pick up on similar cues to classify problems as difficult or not so difficult without needing to solve them. (Or maybe they have already learned?)
Anecdata: my neighbour runs a doggy day care. She's been flooded with 1-star reviews (clearly not from her customers) and received similar coercive phone calls offering to help her improve her online presence. There's not much she can do about it, as Google is not particularly responsive.
Blender dev here.
A majority (roughly 2/3) of the development nowadays happens through the core development team: around 30-40 full-time/part-time paid developers, most located in the Netherlands. We make an average salary that's slightly higher than the sector average here in our country. All of the money comes exclusively from the Dev Fund (fund.blender.org). It's all donations, no strings attached. We say "thank you" and (if they wish) put the logo on the website.