Thoughts: I'm probably missing it, but SceneDreamer looks like it generates geometries on the fly as the viewer walks through the city, and CityDreamer looks like it generates a whole real city first and then lets the user walk through it. I'm wrong, right?
https://www.townscapergame.com/ does a great job of this, even if the agglomerations it produces are more cartoony than not.
For me, this is a really good example of a generative approach: basic rules with well-tuned interactions, producing a great range of complex outcomes that are all coherent despite a large variety of actual layouts.
[...stuff about SandSpielStudio and block visual programming languages like Scratch and Snap!, Long Now talk between Will Wright and Brian Eno about Cellular Automata, etc ...]
The other really cool rabbit hole to explore for generating tiles and even arbitrary graph-based content (I'm sold: hexagons are the bestagons!) is "Wave Function Collapse", which doesn't actually have anything to do with quantum mechanics (it just sounds cool); it's really a kind of constraint solver, related to sudoku solvers.
There's a way to define cellular automata rules by giving examples of the before and after patterns, and WFC is kind of like a statistical, constraint-solving version of that.
So it's really easy for artists to define rules just by drawing: it doesn't even require visual programming, though you can layer visual programming on top of it.
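To make the "sudoku-style constraint solver" framing concrete, here's a toy WFC sketch in Python. The tile names and the adjacency table are made up for illustration, and a real implementation also needs weighted sampling plus restarts or backtracking when a cell runs out of options (this particular tileset can't dead-end, because "coast" is compatible with everything):

    import random

    TILES = ["land", "coast", "sea"]
    # Which tiles may sit next to each other (symmetric adjacency constraint).
    COMPATIBLE = {
        ("land", "land"), ("land", "coast"),
        ("coast", "coast"), ("coast", "sea"),
        ("sea", "sea"),
    }

    def ok(a, b):
        return (a, b) in COMPATIBLE or (b, a) in COMPATIBLE

    def collapse(width, height, rng=random):
        # Every cell starts in "superposition": the full set of candidate tiles.
        grid = [[set(TILES) for _ in range(width)] for _ in range(height)]

        def neighbors(x, y):
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                if 0 <= x + dx < width and 0 <= y + dy < height:
                    yield x + dx, y + dy

        def propagate(x, y):
            # Constraint propagation: drop neighbor candidates that are
            # incompatible with every remaining option here, and repeat
            # until nothing changes (just like sudoku elimination).
            stack = [(x, y)]
            while stack:
                cx, cy = stack.pop()
                for nx, ny in neighbors(cx, cy):
                    allowed = {t for t in grid[ny][nx]
                               if any(ok(t, s) for s in grid[cy][cx])}
                    if allowed != grid[ny][nx]:
                        grid[ny][nx] = allowed
                        stack.append((nx, ny))

        while True:
            # Pick the undecided cell with the fewest remaining options
            # ("lowest entropy"), collapse it to one tile, and propagate.
            undecided = [(len(grid[y][x]), x, y)
                         for y in range(height) for x in range(width)
                         if len(grid[y][x]) > 1]
            if not undecided:
                return [[cell.pop() for cell in row] for row in grid]
            _, x, y = min(undecided)
            grid[y][x] = {rng.choice(sorted(grid[y][x]))}
            propagate(x, y)

    for row in collapse(12, 6):
        print(" ".join(tile[0] for tile in row))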
That's something that Alexander Repenning's "AgentSheets" supported (among other stuff): you could define cellular automata rules by before-and-after examples, wildcards and variables, and attach additional conditions and actions with a visual programming language.
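To sketch the flavor of that (this is nothing like AgentSheets' actual machinery; the fire-spreading rule and the tiny 1-D grid are invented for illustration), rules can literally be (before, after) pattern pairs:

    # Each rule is a (before, after) pair of patterns over a strip of cells;
    # "?" matches anything and, in the after-pattern, keeps the old cell.
    # The invented rule here: fire ("F") spreads into an adjacent tree ("T").
    RULES = [
        (["F", "T"], ["F", "F"]),   # fire spreads right
        (["T", "F"], ["F", "F"]),   # fire spreads left
    ]

    def step(row):
        out = list(row)  # match against the old state, write the new one
        for before, after in RULES:
            for i in range(len(row) - len(before) + 1):
                window = row[i:i + len(before)]
                if all(b in ("?", cell) for b, cell in zip(before, window)):
                    for j, a in enumerate(after):
                        if a != "?":
                            out[i + j] = a
        return out

    row = list("TTTFTTT")
    for _ in range(4):
        print("".join(row))
        row = step(row)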
AgentSheets and other cool systems are described in this classic paper: “A Taxonomy of Simulation Software: A work in progress” from Learning Technology Review by Kurt Schmucker at Apple. It covered many of my favorite systems.
Chaim Gingold wrote a comprehensive "Gadget Background Survey" at HARC, which includes AgentSheets, Alan Kay's favorites Rocky's Boots and Robot Odyssey, Chaim's amazing SimCity Reverse Diagrams, and lots of great stuff I'd never seen before:
Chaim Gingold has analyzed the SimCity (classic) code and visually documented how it works, in his beautiful "SimCity Reverse Diagrams":
>SimCity reverse diagrams: Chaim Gingold (2016).
>These reverse diagrams map and translate the rules of a complex simulation program into a form that is more easily digested, embedded, disseminated, and discussed (Latour 1986).
>The technique is inspired by the game designer Stone Librande's one page game design documents (Librande 2010). If we merge the reverse diagram with an interactive approach, e.g. Bret Victor's Nile Visualization (Victor 2013), such diagrams could be used generatively, to describe programs, and interactively, to allow rich introspection and manipulation of software.
>Latour, Bruno (1986). "Visualization and cognition". In: Knowledge and Society 6, pp. 1–40.
>Librande, Stone (2010). "One-Page Designs". Game Developers Conference.
>Victor, Bret (2013). "Media for Thinking the Unthinkable". MIT Media Lab, Apr. 4, 2013.
[... stuff about AgentSheets, KidSim, Lex Fridman interviews of Michael Levin and Steven Wolfram discussing Cellular Automata, CAM6 simulator, etc ...]
Does anyone use these generative city modeling systems aligned to a vector base map of actual cities? It would be fun to see these 3D cities as fictional versions of known locations. I could see city planners using it for "what-if" scenarios.
It's a procedural city generation engine, first presented at SIGGRAPH 2001 by an ETH Zürich group and commercialized by Procedural Inc., a spinoff that was later acquired by ESRI (ArcGIS).
At its core is a shape grammar based on L-systems [2] that is used to define how a road network is subdivided into lots, then buildings, etc., with increasing level of detail and contextual constraints.
You give it a vector road network and the grammar rules, and it subdivides it spatially into e.g. lots, buildings, floors, facade elements, etc., according to the desired level of detail. Context-dependent rules control that e.g. doors may only be placed on the ground floor, or that the roof goes on top of the top floor.
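Here's a rough sketch of that rewriting idea in Python, using plain (x, y, w, h) rectangles for shapes. The rule names and split logic are invented, and this is not the actual CGA grammar syntax, but it shows the recursive refinement:

    # Shapes are just (x, y, w, h) rectangles; each rule rewrites a named
    # shape into child shapes, recursively adding detail.
    def subdivide(shape, kind, depth=0):
        print("  " * depth + kind + ": " + str(shape))
        x, y, w, h = shape
        if kind == "lot":
            # A lot shrinks to a building footprint with a 1-unit setback.
            subdivide((x + 1, y + 1, w - 2, h - 2), "building", depth + 1)
        elif kind == "building":
            # Context-dependent detail, as described above: the ground
            # floor first, then upper floors, then a roof on the top floor.
            subdivide((x, y, w, 1), "ground_floor", depth + 1)
            for floor in range(1, h - 1):
                subdivide((x, y + floor, w, 1), "floor", depth + 1)
            subdivide((x, y + h - 1, w, 1), "roof", depth + 1)
        elif kind == "ground_floor":
            # Doors may only appear at ground level.
            subdivide((x + w // 2, y, 1, 1), "door", depth + 1)
        # "floor", "roof" and "door" are terminals here; a real grammar
        # keeps subdividing (facades, windows, materials) down to the
        # desired level of detail.

    subdivide((0, 0, 8, 5), "lot")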
The original paper and presentation at SIGGRAPH described the simplicity of the core idea quite beautifully, but I'm having a hard time finding them now. This old tech demo video [3] should give an idea of how it works though.
This is kind of a weird tying-together comment, but Townscaper (referenced in btbuildem's comment []) was partly inspired by a (deterministic, non-NN) application the creator was working on for someone else that did just that. I can't find the tweet, but if you go back through @OskSta's history, he discusses it.
An awful lot of cities have a "digital twin" of some sort in the planning department and frequently the data is public. It would be pretty neat to be able to work from this material.
I work in the field, and the truth is that an awful lot of cities _claim_ to have a digital twin, when what they actually have is a big pile of random GIS data and some 10-year-old 3D models someone made once and never updated.
The images are all tiny, or, in the video, spinning, so you can't easily see how bad they are. It's not that hard to procedurally generate aerial views of cities. Generating a street scene you can walk through is much harder. Generating one where you can go inside the buildings has only been done once that I know of, and there wasn't much interior detail.
That makes sense; if you put Google Earth in, you get Google Earth-like imagery out.
Something that took in StreetView and made storefronts would be interesting.
What's impressive about the current Google Earth is that it's good at extracting and texturing vertical buildings. Look at a big city with densely packed tall buildings and look down into an urban canyon. It does a good job, at least until there's some overhang such as a canopy or an underpass.
That kind of processing is now open source. Open Drone Map can take drone images and construct a textured 3D model. It's surprisingly good at this.
What would be fascinating is to combine Cities: Skylines 2 (if it had some kind of API, which it doesn't) with a model generator like this. CS2 already has all of the necessary graphic elements and positioning, just not the automated design logic.
For low-level (i.e., trees-not-forest) visuals - e.g., where the user can see a block or two from street level - why not use real cities? Don't identify the place, change the names, remove anything especially identifiable (e.g., Eiffel Tower in background), and nobody will know. Someone who lives on that block might find a cool Easter Egg for themselves.
Procedurally generated environments require very, very little storage space (you essentially need to store a few hundred seeds and parameters), and for most games nowadays storage is actually the biggest bottleneck (not just HDD space but temporary buffered storage).
Procedurally generated environments basically approach Kolmogorov-optimal compression for a given domain, without any loss of fidelity when done well. And the generation itself usually costs no more computation than the polygon rendering.
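A tiny illustration of why so little storage is needed; the layout logic here is a toy, but the determinism is the point:

    import random

    def generate_block(seed, n_buildings=10):
        rng = random.Random(seed)  # deterministic: same seed, same layout
        return [
            {
                "x": rng.uniform(0, 100),
                "y": rng.uniform(0, 100),
                "floors": rng.randint(2, 40),
                "style": rng.choice(["brick", "glass", "concrete"]),
            }
            for _ in range(n_buildings)
        ]

    # "Saving the world" means saving only the inputs, a few bytes:
    save = {"seed": 1234, "n_buildings": 10}
    assert generate_block(**save) == generate_block(**save)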
YouTube video: https://www.youtube.com/watch?v=te4zinLTYz0