Spacewalk

Spacewalk is an AI-powered architectural design company providing technology to support small-scale projects lacking expert guidance. Our mission is to enable anyone, regardless of expertise, to improve spaces by simply asking the right questions.

Our research team develops AI models leveraging Large Language Model reasoning capabilities. These models can automatically generate optimal designs by understanding and incorporating regional contexts from simple user descriptions. Our product team delivers these models via APIs for enterprise clients and creates user-friendly tools accessible to both everyday users and architectural professionals.

Our solutions apply to office layouts, residential furniture placement, architectural designs, logistics centers, and manufacturing automation. We've successfully deployed our technology in residential projects in Korea and social housing in Vietnam. Additionally, our models automate compliance checks for building codes and guidelines, streamlining the review process.









LLM-based Design AI

Our LLM-powered AI solution enables users to instantly generate tailored layouts by simply describing their needs in natural language. Delivered through an intuitive API, it specializes in office space planning and residential furniture arrangement.

By harnessing architectural reasoning capabilities, the AI easily adapts to spatial contexts, empowering anyone—even those without architectural expertise—to design their own spaces.









Land Optimization Engine

We are developing and providing Software-as-a-Service solutions that revolutionize how land is utilized, making architectural insights accessible to everyone.

Our focus is on transforming untapped potential into opportunities by creating scalable, tailored solutions that meet the diverse needs of property owners and organizations.









Recent Posts

Checkpointing for Kubernetes Pods


Abstract

Workflow systems in Kubernetes (such as Argo and Kubeflow) provide checkpointing by saving input/output artifacts at each step, but this mainly covers data transfer between pods. The "intermediate state" of long-running tasks inside a pod is not preserved without separate checkpointing, so if the pod fails or restarts, the entire task must be repeated from the beginning.

A checkpointing strategy for the intermediate state inside pods, using external storage such as Amazon S3, is therefore necessary. This article explains how to resume work from the last saved point after a pod failure using S3-based checkpointing, and demonstrates the benefits in time, resource savings, and success rate for iterative experiments such as Genetic Algorithms (GA).

The Need for Checkpoints in Kubernetes Jobs

When performing long-running tasks in a Kubernetes environment (e.g., large-scale data processing, machine learning training, or high-performance geometric simulations), it is common for pods to fail or restart due to competition for spot instances, memory shortages, and other causes. Kubernetes-based workflow systems (e.g., Argo Workflows, Kubeflow Pipelines) already provide robust support for saving the results of each step and passing them to the next step via input/output artifacts. However, this artifact-based storage mainly covers data transfer between workflow steps, i.e., between pods. The "intermediate state" of a long-running task inside a pod (in-memory iterative calculations, experiment progress, and so on) is not preserved without additional measures: if the pod fails or restarts midway, the work inside it starts over from scratch, wasting significant time and resources.

Checkpointing is especially useful in retry situations. Without a checkpoint, a failed or unexpectedly interrupted task must repeat the entire process from the beginning; with one, it can quickly recover from the last saved point. Fine-grained checkpointing inside the pod is therefore required separately. This article covers how to save and restore checkpoints inside a pod, and the effects of doing so.

Saving Checkpoints Using S3

There are several ways to save checkpoint data, but in cloud environments, object storage such as Amazon S3 is the simplest and most scalable option. Save intermediate results to S3 at certain stages (e.g., each epoch or generation); when the pod restarts, load the most recent checkpoint from S3 and resume work from there. This approach has the following advantages:

- Regardless of which node the pod runs on, the same checkpoint is always accessible via S3
- Consistent results even after multiple restarts or retries
- Improved reliability and efficiency of the job

Using Checkpoints in GA (Genetic Algorithm)

For tasks like Genetic Algorithms, which involve repeated calculations over many generations, saving a checkpoint at the end of each generation is especially effective. Whenever a retry is needed (pod failure, network issues, and so on), the experiment can resume immediately from the last saved generation, greatly reducing wasted time and resources.

Information That Must Be Included in a GA Checkpoint

- Current generation number (generation)
- Population (the list or array of parameter sets to be passed to the next generation)
- Best solution (the best parameter set found so far)
- Best fitness (fitness value of the best solution)
- (Optional) random seed, environment information, etc., for experiment reproducibility

All of this information must be saved so that, even if the pod dies mid-run, the experiment can resume from exactly the same state.

Example Code (Python)

Below is example code for saving and loading GA checkpoints in S3.
All essential information is included.

```python
import boto3
import pickle

s3 = boto3.client('s3')
bucket = 'your-s3-bucket'
checkpoint_key = 'checkpoints/ga_generation.pkl'

MAX_GENERATION = 50  # total number of generations to run (assumed value)

def save_checkpoint(data, key=checkpoint_key):
    s3.put_object(Bucket=bucket, Key=key, Body=pickle.dumps(data))

def load_checkpoint(key=checkpoint_key):
    try:
        obj = s3.get_object(Bucket=bucket, Key=key)
        return pickle.loads(obj['Body'].read())
    except s3.exceptions.NoSuchKey:
        return None

# Usage example
generation_to_run = 0  # generation number to run this time
population = None
best_solution = None
best_fitness = None
random_seed = 42  # example: save the seed for reproducibility

# Load checkpoint
checkpoint = load_checkpoint()
if checkpoint:
    # Start from the next generation after the last successful one
    generation_to_run = checkpoint['generation'] + 1
    population = checkpoint['population']
    best_solution = checkpoint['best_solution']
    best_fitness = checkpoint['best_fitness']
    random_seed = checkpoint.get('random_seed', 42)

while generation_to_run < MAX_GENERATION:
    # ... GA operations for the current generation_to_run ...
    # Update population, best_solution, best_fitness

    # Save a checkpoint after successfully completing the current generation
    save_checkpoint({
        'generation': generation_to_run,  # successfully completed generation number
        'population': population,         # parameter set for the next generation
        'best_solution': best_solution,
        'best_fitness': best_fitness,
        'random_seed': random_seed,
    })
    generation_to_run += 1
```

Note: for complete reproducibility, it is recommended to also save the random seed and environment information (if running within the same Docker image, this is usually unnecessary). Add fields to the checkpoint as needed.

Time and Success Rate Comparison Before and After Applying Checkpoints

As a concrete example, suppose a GA task takes about 10 seconds per generation and runs for 50 generations. Compare what happens when the pod fails and the task is interrupted.

Time Comparison

Case 1 - no checkpoint:
- Generations 1-30 run normally, then the pod fails
- Generations 1-25 run normally, then the pod fails
- Generations 1-50 run normally, and the job finally succeeds
- Total generations run: 30 + 25 + 50 = 105
- Total time: 105 × 10 seconds = 1,050 seconds (about 17.5 minutes)

Case 2 - checkpoint:
- Generations 1-30 run normally, then the pod fails
- Generations 30-50 (21 generations) run normally, and the job finally succeeds
- Total generations run: 30 + 21 = 51
- Total time: 51 × 10 seconds = 510 seconds (about 8.5 minutes)

Comparison and Effect

Restarting without a checkpoint takes 1,050 seconds; restarting with a checkpoint takes 510 seconds. You can save more than half the time and resources.

Improved Final Success Probability

With checkpointing, even if the pod fails multiple times, work always resumes from the last saved point. This not only saves time but also greatly increases the probability of completing the experiment successfully, even after repeated failures. For example, suppose each generation succeeds with probability 98%, and all 50 generations must complete within a maximum of 3 pod retries.

Case 1 - no checkpoint:
- Failure probability of each full try: 1 - 0.98^50 ≈ 0.6358
- Probability of failing all three tries: 0.6358^3 ≈ 0.2571
- Probability of succeeding at least once in three tries: 1 - 0.2571 ≈ 0.7429

So there is roughly a 74% chance of final success.

Case 2 - checkpoint:
- Probability that a single generation fails all three of its attempts: 0.02^3 = 8 × 10^-6
- Probability that all 50 generations succeed within 3 tries each: (1 - 8 × 10^-6)^50 ≈ 0.99960

So there is about a 99.96% chance of final success. (Strictly speaking, retries happen at the pod level rather than per generation, but in a pod-level retry plus checkpoint-resume structure, each generation can be attempted up to 3 times.)
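For experimenting with other parameter values, the same arithmetic can be wrapped in two small helper functions. This is a minimal sketch; the function names are ours, not from any library:

```python
def success_without_checkpoint(p_gen: float, n_gens: int, retries: int) -> float:
    # Without a checkpoint, each retry must complete all generations end to end.
    p_full_run = p_gen ** n_gens
    return 1 - (1 - p_full_run) ** retries

def success_with_checkpoint(p_gen: float, n_gens: int, retries: int) -> float:
    # With checkpoint resume, each generation effectively gets its own retries.
    return (1 - (1 - p_gen) ** retries) ** n_gens

print(success_without_checkpoint(0.98, 50, 3))  # ~0.74
print(success_with_checkpoint(0.98, 50, 3))     # ~0.9996
```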
Summary: checkpointing is an important strategy that not only saves time and resources in retry situations but also increases the success rate of experiments.

Conclusion

In Kubernetes environments, S3-based checkpointing greatly reduces the resource waste caused by pod restarts and retries, especially for repetitive, long-running tasks such as GA. By saving a checkpoint at each generation, work can efficiently resume from the point of interruption at any time. For GA in particular, saving all essential information (population, best solution, best fitness, and random seed) is necessary for complete recovery.


Quantifying Architectural Design


1. Measurement: Why It Matters

In the context of space and architecture, "measuring and calculating" goes beyond merely writing down dimensions on a drawing. An architect not only measures the thickness of a building's walls or the height of each floor, but also observes how long people stay in a given space and in which directions they tend to move. In other words, the act of measuring provides crucial clues for comprehensively understanding a design problem. Of course, not everything in design can be fully grasped by numbers alone. However, as James Vincent emphasizes, the very act of measuring makes us look more deeply into details we might otherwise overlook. For example, the claim "the bigger a city gets, the more innovation it generates" gains clarity once supported by quantitative data. Put differently, measurement in design helps move decision-making from intuition alone to a more clearly grounded basis.

2. The History of Measurement

[Figure: Leonardo da Vinci's Vitruvian Man]

Let's take a look back in time. In ancient Egypt, buildings were constructed based on the length from the elbow to the tip of the middle finger. In China, the introduction of the "chi" (Chinese foot) allowed large-scale projects like the Great Wall to proceed with greater precision. Moving on to the Renaissance, Leonardo da Vinci's Vitruvian Man highlights the fusion of art and mathematics. Meanwhile, Galileo Galilei famously remarked, "Measure what is measurable, and make measurable what is not," showing how natural phenomena were increasingly described in exact numerical terms. In the current digital age, we can measure at the nanometer scale, and computers and AI can analyze massive datasets in an instant. Where artists of old relied on finely honed intuition, we can now use 3D printers to produce structures with a margin of error under 1 mm. All of this underscores the importance of measurement and calculation in modern urban planning and architecture.

3. The Meaning of Measurement

British physicist William Thomson (Lord Kelvin) once stated that if something can be expressed in numbers, it implies a certain level of understanding; otherwise, that understanding is incomplete. In this sense, measurement is both a tool for broadening and deepening our understanding and a basis for conferring credibility on the knowledge we gain. In architecture, structural safety relies on calculating loads and understanding the physical properties of materials. When aiming for energy efficiency, predicting daylight levels or quantifying noise can guide us to the optimal design solution. In urban studies, as Geoffrey West points out in his book Scale, the link between growing population and heightened rates of patent filings or innovation is not merely a trend but a statistically verifiable reality, one that helps guide city design more clearly. However, interpretations can vary drastically depending on what is measured and which metrics one chooses to trust. As the McNamara fallacy demonstrates, blind belief in the wrong metrics can obscure the very goals we need to achieve. Because architecture and urban design deal with real human lives, they must also consider values not easily reduced to numbers, such as emotional satisfaction or a sense of place. This is why Vaclav Smil cautions that we must understand what the numbers truly represent: we shouldn't look at numbers alone but also at the context behind them.
4. The "Unmeasurable"

In design, aesthetic sense is a prime example of what's difficult to quantify. Beauty, atmosphere, user satisfaction, and social connectedness all have obvious limits to numerical representation. Still, there have been attempts to systematize such aspects, for instance Birkhoff's aesthetic measure M = O/C, where O is order and C is complexity. Some methods also translate survey feedback into ratings or scores. Yet qualitative context and real-world experiences are not easily captured in raw numbers.

[Figure: Informational Aesthetics Measures]

Design columnist Gillian Tett has warned that over-reliance on numbers can make us miss the bigger context. We need, in other words, to connect "numbers and the human element." Architectural theorist K. Michael Hays notes that although the concept of proportion from ancient Pythagorean and Platonic philosophy led to digital and algorithmic design in modern times, we still cannot fully translate architectural experience into numbers. Floor areas or costs might be represented numerically, but explaining why a particular space is laid out in a certain way cannot be boiled down to mere figures. In the end, designers must grapple with both the quantitative and the "not-yet-numerical" aspects to create meaningful spaces.

5. Algorithmic Methods to Measure, Analyze, and Compute Space

Lately, there has been a rise in sophisticated approaches that integrate graph theory, simulation, and optimization into architectural and urban design. For example, one might represent a building floor plan as nodes and edges in order to calculate the depth or accessibility between rooms, or use genetic algorithms to automate layouts for hospitals or offices. Researchers also frequently simulate traffic or pedestrian movement at the city scale to seek optimal solutions. By quantifying the performance of a space through algorithms and computation, designers can make more rational and creative choices based on those data points.

Spatial Networks

Representing building floor plans or urban street networks as nodes and edges allows the evaluation of connectivity between spaces (or intersections). For instance, if you treat doors as edges between rooms, you can apply shortest-path algorithms (such as Dijkstra or A*) to assess accessibility among different rooms.

Justified Permeability Graph (JPG)

To measure the spatial depth of an interior, choose a particular space (e.g., the main entrance) as a root, then lay out a hierarchical justified graph. By checking how many steps it takes to reach each room from that root (the room's depth), one can identify open, central spaces versus isolated ones.
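As a minimal illustration of these two ideas, the sketch below uses networkx (our choice of library; the floor plan is hypothetical): rooms are nodes, doors are edges, shortest-path length gives room-to-room accessibility, and breadth-first depth from a root gives each room's justified-graph depth.

```python
import networkx as nx

# Hypothetical floor plan: rooms as nodes, doors as edges.
plan = nx.Graph()
plan.add_edges_from([
    ("entrance", "hall"),
    ("hall", "living_room"),
    ("hall", "kitchen"),
    ("living_room", "bedroom"),
    ("kitchen", "pantry"),
])

# Accessibility between two rooms (number of door transitions).
print(nx.shortest_path_length(plan, "entrance", "bedroom"))  # 3

# Justified Permeability Graph: depth of every room from the chosen root.
depths = nx.single_source_shortest_path_length(plan, "entrance")
print(depths)  # {'entrance': 0, 'hall': 1, 'living_room': 2, ...}
```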
Visibility Graph Analysis (VGA)

Divide the space into a fine grid and connect pairs of points that are intervisible, creating a visual connectivity network. Through this, you can predict which spots have the widest fields of view and where people are most likely to linger.

Axial Line Analysis

Divide interior or urban spaces into the longest straight lines (axes) along which a person can actually walk, then treat intersections of these axial lines as edges in a graph. Indices such as Integration or Mean Depth help estimate which routes are centrally located and which spaces are more private.

Genetic Algorithm (GA)

For building layouts or urban land-use planning, multiple requirements (area, adjacency, daylight, etc.) can be built into a fitness function so that solutions evolve through random mutation and selection, moving closer to an optimal layout over successive generations.

Reinforcement Learning (RL)

Treat pedestrians or vehicles as agents and define a reward function for the desired objectives (e.g., fast movement or minimal collisions). The agents then learn the best routes by themselves. This is particularly useful for simulating crowd movement or traffic flow in large buildings or at urban intersections.

6. Using LLM Multimodality for Quantification

With the advancement of AI, there is now an effort to have LLMs (Large Language Models) analyze architectural drawings or design images, partly quantifying qualitative aspects. For instance, one can input an architectural design image, compare it to text-based requirements, and assign a "design compatibility" score. There are also pilot programs that compare building plans, automatically determine whether building regulations are met, and calculate the window-to-wall ratio. Through such processes, circulation distance, daylight area, and facility accessibility can be cataloged, ultimately helping designers decide, "This design seems the most appropriate."
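As a rough sketch of how such a scoring call might look, assuming the OpenAI Python SDK and a multimodal model (the model name, prompt wording, and 0-100 scale are illustrative assumptions, not the pilot programs mentioned above):

```python
import base64
from openai import OpenAI

client = OpenAI()

def design_compatibility_score(image_path: str, requirements: str) -> str:
    # Encode the drawing so it can be sent inline with the prompt.
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()

    response = client.chat.completions.create(
        model="gpt-4o",  # any multimodal model; an assumption here
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Rate how well this floor plan satisfies the "
                         "following requirements on a 0-100 scale, and "
                         "explain briefly:\n" + requirements},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content
```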
7. A Real-World Example: Automating Office Layout

Automating the design of an office floor plan can be roughly divided into three steps. First, an LLM (Large Language Model) interprets user requirements and translates them into a computer-readable format. Next, a parametric model and optimization algorithms generate dozens or even hundreds of design proposals, which are then evaluated against various metrics (e.g., accessibility, energy efficiency, or corridor length). Finally, the LLM summarizes and interprets these results in everyday language. In this scenario, an LLM (like GPT-4) trained on a massive text corpus can convert a user's specification, such as "We need space for 50 employees, 2 meeting rooms, and ample views from the lobby," into a script that parametric modeling tools understand, or provide guidance on how to modify the code.

[Figure: Zoning diagram for an office layout by an architect]

Let's look at a brief example. In the past, an architect had to repeatedly sketch bubble diagrams and manually adjust "Which department goes where? Who gets window access first?" With an LLM, requirements like "Maximize the lobby view and place departments with frequent collaboration within 5 meters of each other" can be transformed directly into a parametric model. Using basic information such as columns, walls, and window positions, the parametric model employs rectangle decomposition and bin-packing algorithms (among others) to generate various office layouts. Once these layouts are automatically evaluated on metrics such as adjacency, distance between collaborating departments, and the ratio of window usage, the optimization algorithm ranks the solutions with the highest scores.

[Figure: LLM-based office layout generation]

If humans had to sift through all these data manually, checking figures like average inter-department distances (5 m vs. a target of 3-4 m), it would get confusing quickly. This is where the LLM comes in again. It can read the optimization results and summarize them in easy-to-understand statements such as: "This design meets the 25% collaboration space requirement, but its increased window use may lead to higher-than-expected energy consumption." Other example summaries might include: "The average distance between collaborating departments is 5 meters, which is slightly more than your stated target of 3-4 meters." "Energy consumption is estimated to be about 10% lower than the standard for an office of this size." If new information comes in, the LLM can use RAG (Retrieval-Augmented Generation) to review documents and instruct the parametric model to include additional constraints, like "We need an emergency stairwell" or "We must enhance soundproofing." Through continuous interaction between human and AI, designers can explore a wide range of alternatives in a short time and leverage "data-informed intuition" to make final decisions.

[Figure: Results of office layout generation]

Essentially, measurement is a tool for deciding "how to look at space." Although determining who occupies which portions of an office, and with what priorities, can be complex, combining LLMs, parametric modeling, and optimization algorithms makes this complexity manageable. In the end, the designer or decision-maker does not lose freedom; rather, they gain the ability to focus on the broader creative and emotional aspects of a project, freed from labor-intensive repetition. That is the promise of an office layout recommendation system powered by "measurement and calculation."

8. Conclusion

Ultimately, measurement is a powerful tool for revealing what was previously unseen. Having precise numbers clarifies problems, allowing us to find solutions more objectively than intuition alone would. In the age of artificial intelligence, LLMs and algorithms extend this process of measurement and calculation, making it possible to quantify at least part of what was once considered impossible to measure. As seen in the automated office layout example, even abstract requirements become quantifiable parameters thanks to LLMs, and parametric models plus optimization algorithms generate and evaluate numerous design options, proposing results that are both rational and creative.


AI and Architectural Design: Present and Future


Introduction

Artificial intelligence (AI) is making waves across a broad range of fields, from automating repetitive tasks to tackling creative problem-solving. Architecture is no exception.

What used to be simple computer-aided drafting (CAD) has now evolved into generative design and optimization algorithms that are transforming the entire workflow of architects and engineers. This article explores how AI is bringing innovation to architectural design, as well as the opportunities and challenges that design professionals face.

Background of AI in Architectural Design

Rule-Based Approaches and Shape Grammar

Early research on automated design relied on rules predefined by humans. Shape grammar, which emerged around the 1970s, demonstrated that even a limited set of rules could replicate the style of a particular architect. One famous example used basic geometric rules to recreate the floor plans of the Renaissance architect Palladio's villas. However, because these systems only operate within the boundaries of predefined rules, tackling highly complex architectural problems remained difficult.

[Figure: Koning and Eizenberg's compositional forms for Wright's prairie-style houses]

Genetic Algorithms and Deep Reinforcement Learning

Inspired by natural evolution, genetic algorithms evaluate random solutions, then breed and mutate the best ones to progressively improve outcomes. Although they are powerful for exploring complex problems, they become time-consuming when many variables are involved. Deep reinforcement learning, popularized by the AlphaGo AI for the game of Go, learns policies that maximize rewards through trial and error. In architecture, designers can use these techniques to create decision-making rules based on context: site shape, regulations, design forms, and more.

Generative Design and the Design Process

Automating Repetitive Tasks and Proposing Alternatives

Generative design automates repetitive tasks and simultaneously produces a wide range of design options. Traditionally, time constraints meant only a few ideas could be tested. With AI, it is possible to experiment with dozens or even hundreds of designs at once, while also evaluating factors such as sunlight, construction cost, functionality, and floor area ratio.

The Changing Role of the Architect

The architect's role in an AI-driven environment is to set objectives and constraints for the AI system, then curate the generated results. Rather than merely drafting drawings or performing calculations, the architect configures the AI's design environment and refines the best solutions by adding aesthetic sense.

The Role of Architects in the AI Era

In 1966, British architect Cedric Price posed a question: "Technology is the answer... but what was the question?" This statement is more relevant today than ever. The core challenge is not "How do we solve a problem?" but "Which problem are we trying to solve?" While advances in AI have largely focused on problem-solving, identifying and defining the right problem is critical in architectural design.

Defining the Problem and Constructing a State Space

Someone with both domain knowledge and computational thinking is best suited to define these problems. With a precisely formulated problem, even a simple algorithm can suffice for effective results; there is no need for overly complex AI. In this context, the architect's role shifts from producing a single, original masterpiece to designing the interactions between elements, that is, shaping the state space. Meanwhile, AI explores and discovers the best combination among countless existing possibilities.
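As a toy illustration of this division of labor (the human formulates the state space and the score; a simple search does the rest), here is a sketch in which the parameter ranges and scoring function are invented for the example:

```python
from itertools import product

# A hypothetical state space defined by the designer.
grid_options = [3, 4, 5]             # structural grid spacing (m)
floor_options = range(2, 6)          # number of floors
rotation_options = [0, 15, 30, 45]   # building rotation (degrees)

def score(grid, floors, rotation):
    # Stand-in for a real evaluation (daylight, FAR, cost, ...).
    return floors * grid - abs(rotation - 15) * 0.1

# Once the problem is well defined, even plain exhaustive search suffices.
best = max(product(grid_options, floor_options, rotation_options),
           key=lambda state: score(*state))
print(best)  # (5, 5, 15) in this toy example
```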
Palladio's Villas and Shape Grammar

A classic example is Rudolf Wittkower's analysis of the Renaissance architect Andrea Palladio's villas. Wittkower uncovered recurring geometric principles, such as the "nine-square grid," symmetrical layouts, and harmonic proportions (e.g., 3:4 or 2:3). While Palladio's villas appear individually unique, they share particular rules under the hood. Researchers later turned these observations into a shape grammar, enabling computers to automatically generate Palladio-style floor plans and demonstrating that once hidden rules are clearly extracted, computers can "discover" new variations by applying them systematically.

Landbook and LBDeveloper

Spacewalk is a company developing AI-powered architectural design tools. In 2018, it released Landbook, an AI-based real-estate development platform that can instantly show property prices, legal development limits, and expected returns. LBDeveloper, on the other hand, is designed for small housing redevelopment projects, automating feasibility and profitability checks: simply entering an address reveals whether redevelopment is possible and what the potential returns are.

AI-Based Design Workflow

LBDeveloper calculates permissible building envelopes based on regulations, then generates building blocks using various grid and axis options. It arranges these blocks, optimizes spacing to avoid collisions, and evaluates efficiency using metrics like the building coverage ratio (BCR) and floor area ratio (FAR). The final step is choosing the best-performing design among the candidates. The total number of combinations is (number of axis options) × (number of grid options) × (number of floor options) × (row shift × column shift options) × (rotation options) × (additional block options). For instance, a rough combination could be 4 × 100 × 31 × (10 × 10) × 37 × (varies by active blocks). In practice, each stage performs its own optimization process rather than brute-forcing all possible combinations.
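Plugging in the example option counts above, and leaving out the variable additional-block factor, gives a feel for the size of this search space:

```python
axis_options = 4
grid_options = 100
floor_options = 31
shift_options = 10 * 10   # row shift x column shift
rotation_options = 37

total = (axis_options * grid_options * floor_options
         * shift_options * rotation_options)
print(f"{total:,}")  # 45,880,000 combinations before the additional-block factor
```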
Future Prospects

Design inherently entails an infinite variety of solutions, making it impossible to reach a perfect optimum under real-world constraints. Even optimization algorithms struggle to guarantee satisfaction when faced with vast search spaces or strict regulations. Large Language Models (LLMs) offer a way to mitigate these problems by generating new ideas and picking best-fit solutions from a range of options. In fields like architecture, where complicated rules are intertwined, LLMs can be especially powerful.

LLMs for Design Evaluation and Decision

LLMs can take pre-generated design proposals and evaluate them across various criteria: legal requirements, user demands, visual balance, circulation efficiency, and more. This kind of automated review is particularly useful when there are numerous candidates. Without AI, an architect must manually inspect each option; with an LLM's guidance, it becomes easier to quickly identify viable designs. Moreover, LLMs are prompt-based, so you only need to phrase your requirements in natural language, for instance, "Please check if there's adequate distance between our building and adjacent properties." The LLM then analyzes the condition and summarizes the results. At the same time, it considers spatial concepts alongside functional requirements, measuring how well the proposed design meets the architect's goals and suggesting improvements. Ultimately, the LLM allows designers to focus on the more profound aspects of design. Architects can rapidly verify whether a concept is feasible, whether regulations and practical constraints are satisfied, and whether the design is aesthetically sound. By taking care of details that might otherwise go unnoticed, AI encourages broader design exploration. We can expect such partnerships between human experts and LLMs, where the AI evaluates and narrows down countless possibilities, to become increasingly common.


Rendering Multiple Geometries


Introduction

In this article, we will introduce methods to optimize and reduce rendering load when rendering multiple geometries.

Abstract

1. The most basic way to create an object for a shape in three.js is to create a single mesh from a geometry and a material.
2. Because the resulting number of draw calls made rendering too slow, our first optimization was to merge geometries by material, reducing the number of meshes.
3. Additionally, for geometries that can be generated from a reference geometry through transformations such as move, rotate, and scale, we used InstancedMesh instead of merging, improving memory usage and reducing the load of geometry creation.

1. Basic Mesh Creation

Let's assume we are rendering a single apartment. Many types of shapes form an apartment, but here we will use a block as an example. When rendering a single block, as below, not much consideration is needed.

```javascript
const [geometry, material] = [
  new THREE.BoxGeometry(20, 10, 3),
  new THREE.MeshBasicMaterial({ color: 0x808080, transparent: true, opacity: 0.2 }),
];
const cube = new THREE.Mesh(geometry, material);
scene.add(cube);
```

[Demo: draw calls / mesh creation time]

However, when rendering an entire apartment, a single unit often has more than 100 geometries (windows, walls, and so on), so rendering 100 units can easily push the geometry count past five digits. This is what happens when one mesh is created per geometry. Below is the result of rendering 10,000 shapes, where each geometry was created by cloning and translating a base shape.

```javascript
// The code below is executed 5,000 times; 10,000 meshes are added to the scene.
const geometry_1 = base_geometry_1.clone().translate(i * x_interval, j * y_interval, k * z_interval);
const geometry_2 = base_geometry_2.clone().translate(i * x_interval, j * y_interval, k * z_interval + 3);
const cube_1 = new THREE.Mesh(geometry_1, material_1);
const cube_2 = new THREE.Mesh(geometry_2, material_2);
scene.add(cube_1);
scene.add(cube_2);
```

[Demo: draw calls / mesh creation time]

2. Merge Geometries

Now let's look at one of the most common rendering optimizations: reducing the number of meshes. In graphics there is a concept called the draw call. The CPU finds the shapes to be rendered in the scene and requests the GPU to render them; the number of these requests is the draw call count. It can be understood as the number of requests to render different meshes with different materials. The CPU, which is not specialized in this kind of parallel work, can become a bottleneck when it has to handle many calls at once.

Source: https://joong-sunny.github.io/graphics/graphics/#%EF%B8%8Fdrawcall

By manipulating the scene with the mouse, you can check the difference in draw calls between this case and the previous one. In the case above, the count is not always 10,000 (the camera's max distance or shapes going off screen reduce it), but in most cases a significant number of calls occur. Since two materials were used here, we merged the geometries per material, so the draw call count is fixed at a maximum of 2.

```javascript
// The code below is executed once; 2 meshes are added to the scene.
const mesh_1 = new THREE.Mesh(BufferGeometryUtils.mergeBufferGeometries(all_geometries_1), material_1);
const mesh_2 = new THREE.Mesh(BufferGeometryUtils.mergeBufferGeometries(all_geometries_2), material_2);
scene.add(mesh_1);
scene.add(mesh_2);
```
[Demo: draw calls / mesh creation time]

You can directly confirm that the rendering load has improved.

3. InstancedMesh

Above, we reduced the load from the draw-call perspective by merging geometries. These shapes are composed only of clones and translations of a base shape, so the base form is the same. The same situation occurs when rendering objects such as buildings or trees: when floors do not need different forms, when parts are copied to form the walls of a floor, or when a single type of tree is used with only size or direction changes. It also applies when the same floor plan is used to generate multiple buildings. In such cases you can optimize further with InstancedMesh, which is efficient in memory and time because it reduces the number of geometry objects created.

In graphics, this concept is called instancing. Instancing renders similar geometries multiple times by sending the geometry data to the GPU once, along with per-instance transformation information, which means the geometry only needs to be created once. Game engines like Unity achieve performance optimization with the same concept through a feature called GPU instancing.

[Figure: Rendering multiple similar shapes (Unity GPU Instancing). Source: https://unity3d.college/2017/04/25/unity-gpu-instancing/]

three.js has a corresponding feature called InstancedMesh. The draw call count is the same as in the geometry-merge case above, but there is a significant advantage in memory and rendering time because each geometry does not need to be created separately. The diagram below simplifies the meshes sent to the GPU in approaches 1, 2, and 3. Although only translation is used in the example below, transformations such as rotate and scale can also be used. Besides avoiding separate meshes, you also avoid creating separate geometries, which shows up as a significant difference in creation time.

```javascript
// Pass the number of instances as the last argument of InstancedMesh.
const mesh_1 = new THREE.InstancedMesh(base_geometry_1, material_1, x_range * y_range * z_range);
const mesh_2 = new THREE.InstancedMesh(base_geometry_2, material_2, x_range * y_range * z_range);

let current_total_index = 0;
const matrix = new THREE.Matrix4();
for (let i = 0; i < x_range; i++) {
  for (let j = 0; j < y_range; j++) {
    for (let k = 0; k < z_range; k++) {
      // Set each instance's transform (translation only, as in the clone example).
      matrix.makeTranslation(i * x_interval, j * y_interval, k * z_interval);
      mesh_1.setMatrixAt(current_total_index, matrix);
      matrix.makeTranslation(i * x_interval, j * y_interval, k * z_interval + 3);
      mesh_2.setMatrixAt(current_total_index, matrix);
      current_total_index++;
    }
  }
}
scene.add(mesh_1);
scene.add(mesh_2);
```

[Demo: draw calls / mesh creation time]

Of course, transformations such as translate (move), rotate (rotation), and scale (size) cannot cover every case, so in many situations the merged-geometry method must be mixed in as well.

Conclusion

There are various ways to optimize three.js rendering; in this article we focused on reducing the rendering burden that comes from the number of geometries. In the future, I will also share experiences fixing load issues caused by memory leaks and other problems. Thank you.


Zone Subdivision With LLM - Expanded Self Feedback Cycle


Introduction

In this post, we explore the use of Large Language Models (LLMs) in a feedback loop to enhance the quality of results through iterative cycles.

The goal is to improve the initial intuitive results provided by LLMs through a cycle of feedback and optimization that extends beyond the internal feedback mechanisms of the LLM itself.

Concept

LLMs can improve results through self-feedback, a widely utilized capability. However, relying on a single user request for complete results is challenging. By re-requesting and exchanging feedback on initial results, we aim to enhance the quality of responses. This process involves not only internal feedback within the LLM but also feedback in a larger cycle that includes algorithms, continuously improving results.

[Figure: self-feedback outside the LLM API (this work) vs. self-feedback inside the LLM API]

Significance of using LLMs:
- Acts as a bridge between vague user needs and specific algorithmic criteria.
- Understands cycle results and reflects them in subsequent cycles to improve responses.

Premises

Two main components:
- LLM: used for intuitive selection; bridges the gap between the user's relatively vague intentions and the optimizer's specific criteria.
- Heuristic optimizer (GA): used for relatively specific optimization.

Use of cycles: by repeating cycle execution, we aim to create a structure that approaches the intended results on its own.

Details

Parameter conversion: convert intuitive selections into clear parameter inputs for the algorithm, using the LLM with structured output (see the sketch after this list).

- Adjust the number of additional subdivisions based on the zone grid and grid size:
  - number_additional_subdivision_x: int
  - number_additional_subdivision_y: int
- Prioritize placement close to boundaries for each use, based on the prompt:
  - place_rest_close_to_boundary: bool
  - place_office_close_to_boundary: bool
  - place_lobby_close_to_boundary: bool
  - place_coworking_close_to_boundary: bool
- Ask about the desired percentage of mixture for different uses in adjacent patches:
  - percentage_of_mixture: number
- Inquire about the percentage each use should occupy:
  - office: number
  - coworking: number
  - lobby: number
  - rest: number
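A minimal sketch of this conversion step, assuming a pydantic schema handed to an LLM API with structured-output support (the class name and the llm_parse / ga_optimize calls are hypothetical; the field names follow the parameters listed above):

```python
from pydantic import BaseModel

class SubdivisionParams(BaseModel):
    number_additional_subdivision_x: int
    number_additional_subdivision_y: int
    place_rest_close_to_boundary: bool
    place_office_close_to_boundary: bool
    place_lobby_close_to_boundary: bool
    place_coworking_close_to_boundary: bool
    percentage_of_mixture: float
    office: float
    coworking: float
    lobby: float
    rest: float

# Hypothetical usage: the LLM fills the schema from the user's prompt,
# and the validated parameters feed the heuristic optimizer (GA).
# params = llm_parse(prompt, schema=SubdivisionParams)
# zones = ga_optimize(zone_grid, params)
```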
Optimization based on LLM responses: use the LLM's answers as optimization criteria, transforming the user's intent into specific criteria.

Incorporate optimization results into the next cycle: insert the optimization results as references when asking the LLM in the next cycle, reinforcing the value of multiple [LLM answer - optimize results with the answer] cycles.

Cycle Structure

One cycle consists of:
- First [LLM answer - optimize results with the answer]: ask the LLM about subdivision criteria based on the prompt, before optimizing zone usage.
- Second [LLM answer - optimize results with the answer]: request intuitive answers from the LLM regarding zone configuration, based on the prompt.

After the second cycle, directly request improved responses by providing the actual optimization results along with the previous LLM responses.

Test Results

Case 1 prompt: "In a large space, the office spaces are gathered in the center for a common goal. I want to place the other spaces close to the boundary."

Case 2 prompt: "The goal is to have teams of about 5 people, working in silos. Therefore, we want mixed spaces where no single use is concentrated."

Case 3 prompt: "Prioritize placing the office close to the boundary to easily receive sunlight, and place other spaces inside."

[GIFs: cycle-by-cycle results for each case]

Conclusion

This work improves final results by expanding the scope of self-feedback beyond the LLM to include LLM requests, optimization, and post-processing. As cycles repeat, the results approach the intended outcomes, reducing the impact of the incomplete initial values that LLMs rely on.