Team Spacewalk

At Spacewalk, we leverage AI-powered architectural design technology to address information inequality and maximize the utilization of land—humanity's finite resource. We empower people to make informed decisions and rational judgments, driving innovation in architectural design and transforming the future of real estate development.

Through our AI technology, we provide optimal solutions that create enhanced value for numerous small-scale properties that have been excluded from professional expertise and support. By democratizing access to architectural expertise, we ensure every property owner can realize their land's full potential.

From first-time land developers to experts, and on to the real-estate policies of major public institutions, Spacewalk's technology continues to evolve so that everyone can create better land value.









SaaS Products

Through Software as a Service, we develop and provide products that revolutionize how land is utilized, making architectural insights accessible to everyone.

Our focus is on transforming untapped potential into opportunities by creating scalable, tailored solutions that meet the diverse needs of property owners and organizations.









Recent Posts

Finding Differentiable Environments


Abstract

When implementing architectural algorithms or parametric design, it's easy to think of them as algorithms that simply produce results from parameters. In heuristic optimizers such as GA or ES, however, the algorithm itself becomes an environment that the optimizer probes by changing parameters. Furthermore, when it is used as a criterion for model learning, as in reinforcement learning, the environment becomes the very standard by which the model's direction is judged. As part of exploring what characteristics such an environment should have, we investigated the significance of differentiable environments and tested the results.

We set up two environments, one with a definite gradient cutoff and one without, and by comparing gradient descent, AdamW, and GA methods, we were able to confirm the significance of differentiable environments.

Problem

Creating an environment for architectural algorithms, or a parametric design environment, intuitively takes the following form: generate a circle with position and radius as parameters. The loss is defined as one-tenth of the difference between the circle's area and the area of a circle with radius 5 (25). When the logic is this simple, there is usually little disconnect between parameters and results, and the radius-loss graph can be drawn as a smooth curve (for x >= 0).

However, as algorithms become more complex, the environment becomes a black box, and even its developers find it difficult to understand how results respond to parameters. This is where the problem arises. Even without considering reinforcement learning models, optimization processes like GA also need to "measure" results as parameters change. When the effect of parameter differences on the environment is too random, the results of parameter changes become excessively noisy. The figure above shows the problem with just one parameter, but an architectural algorithm engine can hardly get by with only one or two parameters. This means not only that smooth curves showing tendencies are difficult to identify, but that optimization modules can barely grasp any tendency at all. Without a continuous [parameter - loss] relationship, there is no continuous engine. We therefore started by consciously trying to keep the score (or negative loss) that the environment produces from its parameters differentiable whenever we add environment functions and parameters.
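The circle example above can be sketched in a few framework-free lines. This is a hedged reconstruction: it assumes the loss is the absolute area difference, |πr² − π·5²| / 10, and uses the analytic gradient in place of torch autograd.

```python
import math

def loss(r):
    # One-tenth of the (assumed absolute) difference between the circle's
    # area and the area of a circle with radius 5.
    return abs(math.pi * r**2 - math.pi * 5**2) / 10

def grad(r):
    # Analytic derivative of the loss away from the kink at r = 5.
    return math.copysign(1.0, r**2 - 25) * (2 * math.pi * r) / 10

r = 1.0  # starting radius
for _ in range(200):
    r -= 0.05 * grad(r)  # plain gradient descent

print(abs(r - 5.0) < 0.25)  # True: r has converged near the target radius
```

Because the loss is a smooth function of the radius (except at the kink), every small change in the parameter produces a proportional, informative change in the loss: exactly the property the rest of the post tries to preserve in more complex environments.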
Premises

Including parametric design algorithms, the environment discussed in this post refers to a module that can calculate geometry-related data and a loss from input parameters. To explicitly ensure differentiability, i.e. parameter connectivity, every step from the starting parameters to the loss calculation is performed only through torch tensor operations.

Environment Details

1. Parameter and Loss Definition

Parameter definition - 16 parameters in total.

Environment 1 - x, y position ratios for 4 rectangles. (x_ratio, y_ratio) maps onto each rectangle's width and height, ensuring a one-to-one correspondence with the (x, y) position.

Environment 2 - interpolation and offset ratios for 4 rectangles. The offset operates around 0.5 of the corresponding width or height, again ensuring a one-to-one correspondence with the (x, y) position generated this way. In other words, there is a definite gradient cutoff: Environment 2 uses the 0.5 threshold to divide the plane into 4 quadrants.

Common - x, y size ratios for 4 rectangles. After the position is determined, x_size and y_size are derived as ratios of the remaining available distance. In other words, both environments cover the same search space: for every point (x, y) ∈ ℝ² in the 2D plane there exists a one-to-one corresponding parameter that generates it.

Loss definition (each loss is multiplied by a coefficient and added to the final loss):

L1: variance of each rectangle's area (each rectangle should have a similar area)

$$ \displaystyle L1 = \frac{1}{n} \sum_{i=1}^{n} (Area(Rectangle_i) - \mu)^2 $$

L2: sum of overlapping areas between rectangles (rectangles should not overlap)

$$ \displaystyle L2 = \sum_{i=1}^{n} \sum_{j=i+1}^{n} Area(Rectangle_i \cap Rectangle_j) $$

L3: aspect ratio of each rectangle (deviation from 1.5)

$$ \displaystyle L3 = \sum_{i=1}^{n} \left| \frac{\max(w_i, h_i)}{\min(w_i, h_i)} - 1.5 \right| $$

L4: sum of the areas of all rectangles (the total area should be large)

$$ \displaystyle L4 = \sum_{i=1}^{n} Area(Rectangle_i) $$

Checking Differentiability through Primitive Gradient Descent

# This is not an optimizer that modifies a learning model's weights,
# but differentiable programming that directly updates the parameters
# used for generating the results.
optimizer = torch.optim.SGD([parameters], lr=learning_rate)

A sigmoid was used to keep the parameters in the 0-1 range. We tested both environments with this simple method of applying backpropagated gradients to the parameters. Environment 1 showed more consistent tendencies, reached smaller final loss values, and produced shape arrangements closer to the intended results. (Since the loss here is not complete but meant for understanding test tendencies, we focus on the loss numbers themselves rather than the shape-arrangement images.) In other words, while both environments cover the same search space, Environment 1 can be considered the more differentiable one.

[Loss curves: Environment 1 and Environment 2, Loss 1-4]

Now we optimize in both environments with the GA module and the Adam optimizer.

1. Optimization Test Results Using the AdamW Optimizer

optimizer = torch.optim.AdamW([parameters], lr=learning_rate)

[Loss curves: Environment 1 and Environment 2, Loss 1-4]

2. Optimization Test Results Using a Genetic Algorithm

With a GA the final results are similar: given sufficient computation, and since the search space itself is the same, both environments converge to almost identical scores. However, it is often impossible to provide sufficient computation every time; moving quickly toward the answer also matters. We can confirm that Environment 1 clearly has better initial stability.

[Difference in the initial loss curves between the two environments]

Moreover, the GA used a population of 100 per generation.
In other words, this is the result of 20,000 computations, unlike the previous two cases, which performed only 200. While a GA may be one way to guarantee final performance, it is hard to call it the most efficient of these three methods. (Only 200 computations corresponds to just 2 generations.)

[Loss curves: Environment 1 and Environment 2, Loss 1-4]

Conclusion

We confirmed that differentiable environments, where changes in parameters lead to continuous changes in the results, achieve better outcomes in the optimization process. This tendency was observed not only with simple gradient descent and AdamW, but even with optimization algorithms like GA. While it may not be possible to make every parameter differentiable, we reaffirmed that it is worth consciously adding parameters to the environment in a differentiable form whenever development allows. To reach similar scores, the GA required far more evaluations, which suggests there are more efficient methods than GA; we expect differentiable environments and related methods to reduce the total number of computations. Let's use differentiable environments!
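The gap between the two environments can be reproduced with a deliberately tiny, hypothetical pair of parameterizations of the same search space: one smooth, one with a hard cutoff (a quantizing step standing in for Environment 2's 0.5-threshold branch). A central finite difference shows what a gradient-based optimizer would see:

```python
def loss_smooth(p):
    # Environment-1 style: position x = 10 * p depends continuously on p.
    return (10 * p - 7) ** 2

def loss_cutoff(p):
    # Environment-2 style: a hard branch makes x piecewise constant in p,
    # so the gradient is zero almost everywhere (gradient cutoff).
    return (round(10 * p) - 7) ** 2

def finite_diff(f, p, h=1e-4):
    # Central finite difference: the slope an optimizer would estimate.
    return (f(p + h) - f(p - h)) / (2 * h)

print(finite_diff(loss_smooth, 0.32))  # about -76: points toward the optimum
print(finite_diff(loss_cutoff, 0.32))  # 0.0: the optimizer sees a flat landscape
```

Both functions describe the same set of reachable losses, but only the smooth parameterization gives the optimizer a usable direction, which mirrors why Environment 1 converged faster in every test above.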


Quantifying Architectural Design


1. Measurement: Why It Matters

In the context of space and architecture, "measuring and calculating" goes beyond merely writing down dimensions on a drawing. For instance, an architect not only measures the thickness of a building's walls or the height of each floor, but also observes how long people stay in a given space and in which directions they tend to move. In other words, the act of measuring provides crucial clues for comprehensively understanding a design problem. Of course, not everything in design can be fully grasped by numbers alone. However, as James Vincent emphasizes, the very act of measuring makes us look more deeply into details we might otherwise overlook. For example, the claim "the bigger a city gets, the more innovation it generates" gains clarity once supported by quantitative data. Put differently, measurement in design helps move decision-making from intuition alone to a more clearly grounded basis.

2. The History of Measurement

Leonardo da Vinci's Vitruvian Man

Let's take a look back in time. In ancient Egypt, buildings were constructed based on the length from the elbow to the tip of the middle finger. In China, the introduction of the "chi" (Chinese foot) allowed large-scale projects like the Great Wall to proceed with greater precision. Moving on to the Renaissance, Leonardo da Vinci's Vitruvian Man highlights the fusion of art and mathematics. Meanwhile, Galileo Galilei famously remarked, "Measure what is measurable, and make measurable what is not," showing how natural phenomena were increasingly described in exact numerical terms. In the current digital age, we can measure at the nanometer scale, and computers and AI can analyze massive datasets in an instant. Where artists of old relied on finely honed intuition, we can now use 3D printers to produce structures with a margin of error of less than 1 mm. All of this underscores the importance of measurement and calculation in modern urban planning and architecture.

3. The Meaning of Measurement

British physicist William Thomson (Lord Kelvin) once stated that if something can be expressed in numbers, it implies a certain level of understanding; otherwise, that understanding is incomplete. In this sense, measurement is both a tool for broadening and deepening our understanding and a basis for conferring credibility on the knowledge we gain. Even in architecture, structural safety relies on calculating loads and understanding the physical properties of materials. When aiming for energy efficiency, predicting daylight levels or quantifying noise can guide us to the optimal design solution. In urban studies, as Geoffrey West points out in his book Scale, the link between increasing population and heightened rates of patent filings or innovation is not merely a trend but a statistically verifiable reality, one that helps guide city design more clearly. However, interpretations can vary drastically depending on what is measured and which metrics one chooses to trust. As demonstrated by the McNamara fallacy, blind belief in the wrong metrics can obscure the very goals we need to achieve. Because architecture and urban design deal with real human lives, they must also consider values not easily reducible to numbers, such as emotional satisfaction or a sense of place. This is why Vaclav Smil cautions that we must understand what the numbers truly represent: we shouldn't look at numbers alone, but also at the context behind them.

4. The "Unmeasurable"

In design, aesthetic sense is a prime example of what's difficult to quantify. Beauty, atmosphere, user satisfaction, and social connectedness all have obvious limits to numerical representation. Still, there have been attempts to systematize such aspects, for instance Birkhoff's aesthetic measure (M = O/C, where O = order and C = complexity). Some methods also translate survey feedback into ratings or scores.
Yet qualitative context and real-world experiences are not easily captured in raw numbers.

Informational Aesthetics Measures

Design columnist Gillian Tett has warned that over-reliance on numbers can make us miss the bigger context; we need, in other words, to connect "numbers and the human element." Architectural theorist K. Michael Hays notes that although the concept of proportion from ancient Pythagorean and Platonic philosophy led to digital and algorithmic design in modern times, we still cannot fully translate architectural experience into numbers. Floor areas or costs might be represented numerically, but explaining why a particular space is laid out in a certain way cannot be boiled down to mere figures. In the end, designers must grapple with both the quantitative and the "not-yet-numerical" aspects to create meaningful spaces.

5. Algorithmic Methods to Measure, Analyze, and Compute Space

Lately, there has been a rise in sophisticated approaches that integrate graph theory, simulation, and optimization into architectural and urban design. For example, one might represent a building floor plan as nodes and edges in order to calculate the depth or accessibility between rooms, or use genetic algorithms to automate layouts for hospitals or offices. Researchers also frequently simulate traffic or pedestrian movement at the city scale to seek optimal solutions. By quantifying the performance of a space through algorithms and computation, designers can make more rational and creative choices based on those data points.

Spatial Networks

Representing building floor plans or urban street networks as nodes and edges allows the evaluation of connectivity between spaces (or intersections). For instance, if you treat doors as edges between rooms, you can apply shortest-path algorithms (like Dijkstra or A*) to assess accessibility among different rooms.
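As a minimal sketch of the door-graph idea (the five-room plan, its names, and its adjacencies are invented for illustration), a breadth-first search over rooms connected by doors yields both the step distance between rooms and each room's depth from a chosen root:

```python
from collections import deque

# Hypothetical floor plan: nodes are rooms, edges are doors.
doors = {
    "entrance": ["hall"],
    "hall": ["entrance", "living", "kitchen"],
    "living": ["hall", "bedroom"],
    "kitchen": ["hall"],
    "bedroom": ["living"],
}

def depths(graph, root):
    # BFS from the root: each room's unweighted shortest-path distance
    # is its "depth" in a justified permeability graph.
    dist = {root: 0}
    queue = deque([root])
    while queue:
        room = queue.popleft()
        for nxt in graph[room]:
            if nxt not in dist:
                dist[nxt] = dist[room] + 1
                queue.append(nxt)
    return dist

print(depths(doors, "entrance"))
# {'entrance': 0, 'hall': 1, 'living': 2, 'kitchen': 2, 'bedroom': 3}
```

Since every door counts equally here, BFS suffices; with weighted edges (corridor lengths, travel times) the same graph would be fed to Dijkstra or A* instead. The deepest room in this plan, the bedroom at depth 3, is its most private space.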
Justified Permeability Graph (JPG)

To measure the spatial depth of an interior, you can choose a particular space (e.g., the main entrance) as a root, then lay out a hierarchical justified graph. By checking how many steps it takes to reach each room from that root (i.e., the room's depth), one can identify open, central spaces versus isolated ones.

Visibility Graph Analysis (VGA)

Divide the space into a fine grid and connect points that are visually intervisible, creating a visual connectivity network. Through this, you can predict which spots have the widest fields of view and where people are most likely to linger.

Axial Line Analysis

Divide interior or urban spaces into the longest straight lines (axes) along which a person can actually walk, then treat intersections of these axial lines as edges in a graph. Indices such as Integration or Mean Depth help estimate which routes are central and which spaces are more private.

Genetic Algorithm (GA)

For building layouts or urban land-use planning, multiple requirements (area, adjacency, daylight, etc.) can be built into a fitness function so that solutions evolve through random mutation and selection, moving closer to an optimal layout over successive generations.

Reinforcement Learning (RL)

Treat pedestrians or vehicles as agents and define a reward function for the desired objectives (e.g., fast movement or minimal collisions); the agents then learn the best routes by themselves. This is particularly useful for simulating crowd movement or traffic flow in large buildings or at urban intersections.

6. Using LLM Multimodality for Quantification

With the advancement of AI, there is now an effort to have LLMs (large language models) analyze architectural drawings or design images, partly quantifying qualitative aspects.
For instance, one can input an architectural design image, compare it to text-based requirements, and assign a "design compatibility" score. There are also pilot programs that compare building plans, automatically determine whether building regulations are met, and calculate the window-to-wall ratio. Through such processes, things like circulation distance, daylight area, and facility accessibility can be cataloged, ultimately helping designers decide, "This design seems the most appropriate."

7. A Real-World Example: Automating Office Layout

Automating the design of an office floor plan can be roughly divided into three steps. First, an LLM (large language model) interprets user requirements and translates them into a computer-readable format. Next, a parametric model and optimization algorithms generate dozens or even hundreds of design proposals, which are then evaluated against various metrics (e.g., accessibility, energy efficiency, or corridor length). Finally, the LLM summarizes and interprets these results in everyday language. In this scenario, an LLM (like GPT-4) trained on a massive text corpus can convert a user's specification, such as "We need space for 50 employees, 2 meeting rooms, and ample views from the lobby," into a script that parametric modeling tools understand, or provide guidance on how to modify the code.

Zoning diagram for office layout by architect

Let's look at a brief example. In the past, an architect would have to repeatedly sketch bubble diagrams and manually adjust "Which department goes where? Who gets window access first?" With an LLM, you can directly transform requirements like "Maximize the lobby view and place departments with frequent collaboration within 5 meters of each other" into a parametric model.
Using basic information such as columns, walls, and window positions, the parametric model employs rectangle decomposition and bin-packing algorithms (among others) to generate various office layouts. Once these layouts are automatically evaluated on metrics such as adjacency, distance between collaborating departments, and the ratio of window usage, the optimization algorithm ranks the highest-scoring solutions.

LLM-based office layout generation

If humans were to sift through all these data manually, checking, say, average inter-department distances (5 m vs. a target of 3-4 m), it would get confusing quickly. This is where the LLM comes in again. It can read the optimization results and summarize them in easy-to-understand statements such as: "This design meets the 25% collaboration-space requirement, but its increased window use may lead to higher-than-expected energy consumption." Some example summaries might include: "The average distance between collaborating departments is 5 meters, which is slightly more than your stated target of 3-4 meters." "Energy consumption is estimated to be about 10% lower than the standard for an office of this size." If new information comes in, the LLM can use RAG (retrieval-augmented generation) to review documents and instruct the parametric model to include additional constraints, like "We need an emergency stairwell" or "We must enhance soundproofing." Through continuous interaction between human and AI, designers can explore a wide range of alternatives in a short time and leverage "data-informed intuition" to make final decisions.

Results of office layout generation

Essentially, measurement is a tool for deciding "how to look at space." Although determining who occupies which portions of an office, and with what priorities, can be complex, combining LLMs, parametric modeling, and optimization algorithms makes this complexity more manageable.
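A drastically simplified sketch of the packing-and-scoring step follows. All department names, widths, and the floor size are hypothetical, and real tools use 2D rectangle decomposition rather than this one-row "shelf" placement:

```python
# Hypothetical floor: one 30 m strip; departments are (name, width in meters).
FLOOR_WIDTH = 30
departments = [("sales", 10), ("engineering", 12), ("hr", 6)]

# Greedy shelf packing: place each department left to right.
x, layout = 0, {}
for name, width in departments:
    layout[name] = (x, x + width)  # occupied interval along the strip
    x += width
assert x <= FLOOR_WIDTH  # the layout fits on the floor

# One evaluation metric: distance between collaborating departments.
def center(interval):
    return (interval[0] + interval[1]) / 2

collab_distance = abs(center(layout["sales"]) - center(layout["engineering"]))
print(collab_distance)  # 11.0 meters, versus a stated target of 3-4 m
```

An optimizer would now permute the department order (or move to true 2D bin packing), re-evaluate metrics like this one, and rank the candidates; the LLM's role is to turn "place collaborating departments within 5 meters" into such a metric, and to narrate the resulting scores afterwards.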
In the end, the designer or decision-maker does not lose freedom; rather, they gain the ability to focus on the broader creative and emotional aspects of a project, freed from labor-intensive repetition. That is the promise of an office layout recommendation system powered by "measurement and calculation."

8. Conclusion

Ultimately, measurement is a powerful tool for revealing what was previously unseen. Having precise numbers clarifies problems, allowing us to find solutions more objectively than when relying on intuition alone. In the age of artificial intelligence, LLMs and algorithms extend this process of measurement and calculation, making it possible to quantify at least part of what was once considered impossible to measure. As seen in the automated office layout example, even abstract requirements become quantifiable parameters thanks to LLMs, and parametric models plus optimization algorithms generate and evaluate numerous design options, proposing results that are both rational and creative.


AI and Architectural Design: Present and Future


Introduction

Artificial intelligence (AI) is making waves across a broad range of fields, from automating repetitive tasks to tackling creative problem-solving. Architecture is no exception.

What used to be simple computer-aided drafting (CAD) has now evolved into generative design and optimization algorithms that are transforming the entire workflow of architects and engineers. This article explores how AI is bringing innovation to architectural design, as well as the opportunities and challenges that design professionals face.

Background of AI in Architectural Design

Rule-Based Approaches and Shape Grammar

Early research on automated design relied on rules predefined by humans. Shape grammar, which emerged around the 1970s, demonstrated that even a limited set of rules could replicate the style of a particular architect. One famous example used basic geometric rules to recreate the floor plans of the Renaissance architect Palladio's villas. However, because these systems only operate within the boundaries of predefined rules, tackling highly complex architectural problems remained difficult.

Koning's and Eisenberg's compositional forms for Wright's prairie-style houses

Genetic Algorithms and Deep Reinforcement Learning

Inspired by natural evolution, genetic algorithms evaluate random solutions, then breed and mutate the best ones to progressively improve outcomes. Although they are powerful for exploring complex problems, they become time-consuming when many variables are involved. Deep reinforcement learning, popularized by the AlphaGo AI for the game of Go, learns policies that maximize rewards through trial and error. In architecture, designers can use these techniques to create decision-making rules based on context: site shape, regulations, design forms, and more.

Generative Design and the Design Process

Automating Repetitive Tasks and Proposing Alternatives

Generative design automates repetitive tasks and simultaneously produces a wide range of design options. Traditionally, time constraints meant only a few ideas could be tested.
With AI, it's possible to experiment with dozens or even hundreds of designs at once, while also evaluating factors such as sunlight, construction cost, functionality, and floor area ratio.

The Changing Role of the Architect

The architect's role in an AI-driven environment is to set objectives and constraints for the AI system, then curate the generated results. Rather than merely drafting drawings or performing calculations, the architect configures the AI's design environment and refines the best solutions with aesthetic judgment.

The Role of Architects in the AI Era

In 1966, British architect Cedric Price posed a question: "Technology is the answer... but what was the question?" This statement is more relevant today than ever. The core challenge is not "How do we solve a problem?" but rather "Which problem are we trying to solve?" While advances in AI have largely focused on problem-solving, identifying and defining the right problem is critical in architectural design.

Defining the Problem and Constructing a State Space

Someone with both domain knowledge and computational thinking is best suited to define these problems. With a precisely formulated problem, even a simple algorithm can suffice for effective results; there is no need for overly complex AI. In this context, the architect's role shifts from producing a single, original masterpiece to designing the interactions between elements, that is, shaping the state space. Meanwhile, AI explores and discovers the best combination among countless existing possibilities.

Palladio's Villas and Shape Grammar

A classic example is Rudolf Wittkower's analysis of the Renaissance architect Andrea Palladio's villas. Wittkower uncovered recurring geometric principles, such as the "nine-square grid," symmetrical layouts, and harmonic proportions (e.g., 3:4 or 2:3). While Palladio's villas appear individually unique, they share particular rules under the hood.
Researchers later turned these observations into a shape grammar, enabling computers to automatically generate Palladio-style floor plans and demonstrating that once hidden rules are clearly extracted, computers can "discover" new variations by applying them systematically.

Landbook and LBDeveloper

Spacewalk is a company developing AI-powered architectural design tools. In 2018, it released Landbook, an AI-based real-estate development platform that can instantly show property prices, legal development limits, and expected returns. LBDeveloper, on the other hand, is designed for small housing redevelopment projects, automating the process of checking feasibility and profitability: simply entering an address reveals whether redevelopment is possible and the potential returns.

AI-Based Design Workflow

LBDeveloper calculates permissible building envelopes based on regulations, then generates building blocks using various grid and axis options. It arranges these blocks, optimizes spacing to avoid collisions, and evaluates efficiency using metrics like the building coverage ratio (BCR) and floor area ratio (FAR). The final step is choosing the best-performing design among the candidates. The total number of combinations is (Number of Axis Options) × (Number of Grid Options) × (Number of Floor Options) × (Row Shift × Column Shift Options) × (Rotation Options) × (Additional Block Options). For instance, a rough combination could be 4 × 100 × 31 × (10×10) × 37 × (varies by active blocks). In practice, each stage performs its own optimization rather than brute-forcing all possible combinations.

Future Prospects

Design inherently entails an infinite variety of solutions, making it impossible to reach a perfect optimum when bound by real-world constraints. Even optimization algorithms struggle to guarantee satisfaction when faced with vast search spaces or strict regulations.
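Plugging the rough numbers above into the combination formula (treating the additional-block factor as 1, since the text says it varies) shows why brute force over this search space is impractical and each stage optimizes separately:

```python
# Rough combination count from the formula above; the additional-block
# factor is omitted because it varies with the active blocks.
axis_options = 4
grid_options = 100
floor_options = 31
shift_options = 10 * 10   # row shift x column shift
rotation_options = 37

total = axis_options * grid_options * floor_options * shift_options * rotation_options
print(f"{total:,}")  # 45,880,000 candidates before the additional-block factor
```

Even at a microsecond per full evaluation, that is close to a minute for a single site, and the additional-block factor multiplies it further, hence the staged, per-step optimization described above.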
Large language models (LLMs) offer a way to mitigate these problems by generating new ideas and picking best-fit solutions from a range of options. In fields like architecture, where complicated rules are intertwined, LLMs can be especially powerful.

LLMs for Design Evaluation and Decision

LLMs can take pre-generated design proposals and evaluate them across various criteria: legal requirements, user demands, visual balance, circulation efficiency, and more. This kind of automated review is particularly useful when there are numerous candidates. Without AI, an architect must manually inspect each option; with an LLM's guidance, it becomes easier to quickly identify viable designs. Moreover, LLMs are prompt-based, so you only need to phrase your requirements in natural language, for instance, "Please check if there's adequate distance between our building and adjacent properties." The LLM then analyzes the condition and summarizes the results. At the same time, it considers spatial concepts along with functional requirements, measuring how well the proposed design meets the architect's goals and suggesting improvements. Ultimately, the LLM allows designers to focus on the more profound aspects of design. Architects can rapidly verify whether a concept is feasible, whether regulations and practical constraints are satisfied, and whether the design is aesthetically sound. By taking care of details that might otherwise go unnoticed, AI encourages broader design exploration. We can expect such partnerships between human experts and LLMs, where the AI evaluates and narrows down countless possibilities, to become increasingly common.


Rendering Multiple Geometries


Introduction

In this article, we introduce methods to optimize rendering and reduce the load when rendering many geometries.

Abstract

1. The most basic way to create an object for a shape in three.js is to create a single mesh from a geometry and its corresponding material.
2. Because the resulting number of draw calls made rendering too slow, our first optimization merged geometries by material to reduce the number of meshes.
3. Additionally, for geometries that can be generated from a reference geometry through transformations such as move, rotate, and scale, we improved memory usage and reduced geometry-creation load by using InstancedMesh instead of merging.

1. Basic Mesh Creation

Let's assume we are rendering a single apartment. Many types of shapes make up an apartment, but here we will use a block as an example. When rendering a single block as shown below, not much consideration is needed.

...
const [geometry, material] = [
  new THREE.BoxGeometry(20, 10, 3),
  new THREE.MeshBasicMaterial({ color: 0x808080, transparent: true, opacity: 0.2 }),
];
const cube = new THREE.Mesh(geometry, material);
scene.add(cube);
...

[Interactive demo: draw calls / mesh creation time]

However, when rendering an entire apartment, a single unit often has more than 100 geometries (windows, walls, and so on), so rendering 100 units can easily push the geometry count past five digits. This is what happens when one mesh is created per geometry. Below is the result of rendering 10,000 shapes, each geometry created by cloning and translating a base shape.

// The code below is executed 5,000 times. The number of meshes added to the scene is 10,000.
...
const geometry_1 = base_geometry_1.clone().translate(i * x_interval, j * y_interval, k * z_interval);
const geometry_2 = base_geometry_2.clone().translate(i * x_interval, j * y_interval, k * z_interval + 3);
const cube_1 = new THREE.Mesh(geometry_1, material_1);
const cube_2 = new THREE.Mesh(geometry_2, material_2);
scene.add(cube_1);
scene.add(cube_2);
...

[Interactive demo: draw calls / mesh creation time]

2. Merge Geometries

Now let's look at one of the commonly used rendering optimizations: reducing the number of meshes. In graphics, there is a concept called the draw call. The CPU finds the shapes to be rendered in the scene and asks the GPU to render them; the number of these requests is the draw-call count. It can be understood as the number of requests to render different meshes with different materials. The CPU, which is not specialized for this kind of work, can become a bottleneck when it has to issue many calls at once.

Source: https://joong-sunny.github.io/graphics/graphics/#%EF%B8%8Fdrawcall

By manipulating the scene with the mouse, you can check the difference in draw calls between this case and the previous one. In the case above, for reasons such as the camera's max distance or shapes leaving the screen, the call count is not always 10,000, but in most cases a significant number of calls occur. Since two materials were used here, we merged the geometries per material, so the draw-call count is capped at 2.

// The code below is executed once. The number of meshes added to the scene is 2.
...
const mesh_1 = new THREE.Mesh(BufferGeometryUtils.mergeBufferGeometries(all_geometries_1), material_1);
const mesh_2 = new THREE.Mesh(BufferGeometryUtils.mergeBufferGeometries(all_geometries_2), material_2);
scene.add(mesh_1);
scene.add(mesh_2);
...

[Interactive demo: draw calls / mesh creation time]

You can directly confirm that the rendering load has improved.

3. InstancedMesh

Above, we reduced the load from the draw-call perspective by merging geometries. The shapes here are composed only through clone and translate, so the base form is the same. The same situation occurs when rendering objects such as buildings or trees.
For example: when no different form is needed per floor, when parts are copied to form the walls of a floor, or when a single type of tree is reused with only its size or orientation changed. It also applies when the same floor plan is used to generate multiple buildings. In these cases you can optimize further with InstancedMesh, which reduces the number of geometry objects created and is efficient in both memory and time.

Graphics has a concept called instancing. Instancing renders similar geometries many times by sending the geometry data to the GPU once, along with each instance's transformation, which means the geometry only needs to be created once. Game engines such as Unity use the same concept for performance optimization through a feature called GPU instancing.

Rendering multiple similar shapes (Unity GPU Instancing). Source: https://unity3d.college/2017/04/25/unity-gpu-instancing/

three.js has a corresponding feature called InstancedMesh. The draw-call count is the same as in the merged-geometry case above, but because each geometry does not have to be created separately, there is a significant advantage in memory and rendering time. The diagram below simplifies the meshes sent to the GPU in approaches 1, 2, and 3. Although only translation is used in the example below, transformations such as rotate and scale can also be used. Beyond avoiding separate meshes, avoiding separate geometries makes a significant difference in creation time.

```javascript
// Pass the number of instances as the last argument of InstancedMesh.
const mesh_1 = new THREE.InstancedMesh(base_geometry_1, material_1, x_range * y_range * z_range);
const mesh_2 = new THREE.InstancedMesh(base_geometry_2, material_2, x_range * y_range * z_range);

// Set each instance's transformation (here, the same translations as before).
let current_total_index = 0;
for (let i = 0; i < x_range; i++) {
  for (let j = 0; j < y_range; j++) {
    for (let k = 0; k < z_range; k++) {
      const translation = new THREE.Matrix4().makeTranslation(i * x_interval, j * y_interval, k * z_interval);
      mesh_1.setMatrixAt(current_total_index, translation);
      mesh_2.setMatrixAt(current_total_index, new THREE.Matrix4().makeTranslation(i * x_interval, j * y_interval, k * z_interval + 3));
      current_total_index++;
    }
  }
}
scene.add(mesh_1);
scene.add(mesh_2);
```

Of course, transformations such as translate (move), rotate (rotation), and scale (resize) cannot cover every case, so the merged-geometry method often has to be applied in combination.

Conclusion

There are various ways to optimize three.js rendering, but this article focused on reducing the rendering burden that comes from the number of geometries. In a future post, I will share experiences fixing load problems caused by memory leaks and other issues. Thank you.
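The draw-call argument above can be made concrete in a minimal sketch. This is plain JavaScript with no three.js dependency; a "mesh" is just a geometry/material pair, a "geometry" is a flat array of vertex floats, and all names (`naiveMeshes`, `mergedMeshes`, `materialOf`) are illustrative rather than part of any real API.

```javascript
// One mesh per geometry: draw calls == number of geometries.
function naiveMeshes(geometries, materialOf) {
  return geometries.map((g) => ({ geometry: g, material: materialOf(g) }));
}

// Group geometries by material and concatenate their vertex buffers:
// draw calls == number of distinct materials.
function mergedMeshes(geometries, materialOf) {
  const groups = new Map();
  for (const g of geometries) {
    const m = materialOf(g);
    if (!groups.has(m)) groups.set(m, []);
    groups.get(m).push(...g);
  }
  return [...groups.entries()].map(([material, merged]) => ({ geometry: merged, material }));
}

// 10,000 one-triangle geometries split across two materials.
const geometries = [];
for (let i = 0; i < 10000; i++) geometries.push([i, 0, 0, i, 1, 0, i, 0, 1]);
const materialOf = (g) => (g[0] % 2 === 0 ? "material_1" : "material_2");

console.log(naiveMeshes(geometries, materialOf).length);  // 10000 draw calls
console.log(mergedMeshes(geometries, materialOf).length); // 2 draw calls
```

The mesh count, and with it the draw-call count, drops from the number of geometries to the number of materials, which is exactly why the merged version renders with at most 2 calls.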
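The memory advantage of instancing can also be sketched with simple arithmetic (plain JavaScript, no three.js; the function names are illustrative): merging uploads a transformed copy of the whole vertex buffer per instance, while instancing uploads the base buffer once plus a 4x4 matrix (16 floats) per instance.

```javascript
// Floats sent to the GPU under geometry merging:
// the whole vertex buffer is duplicated for every instance.
function mergedFloatCount(baseVertexFloats, instanceCount) {
  return baseVertexFloats * instanceCount;
}

// Floats sent under instancing:
// the base buffer once, plus one 4x4 transform matrix per instance.
function instancedFloatCount(baseVertexFloats, instanceCount) {
  return baseVertexFloats + 16 * instanceCount;
}

// A box geometry in three.js has 24 vertices => 72 position floats.
const base = 72;
const n = 10000;

console.log(mergedFloatCount(base, n));    // 720000
console.log(instancedFloatCount(base, n)); // 160072
```

Real buffers carry more attributes (normals, UVs), which only widens the gap, since those are also duplicated by merging but shared by instancing.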


Zone Subdivision With LLM - Expanded Self Feedback Cycle

Introduction

In this post, we explore the use of Large Language Models (LLMs) in a feedback loop to enhance the quality of results through iterative cycles.

The goal is to improve the initial intuitive results provided by LLMs through a cycle of feedback and optimization that extends beyond the internal feedback mechanisms of LLMs.

Concept

LLMs can improve results through self-feedback, a widely used capability. However, it is hard to get complete results from a single user request. By re-requesting and exchanging feedback on initial results, we aim to enhance the quality of responses. This process involves not only feedback internal to the LLM, but feedback in a larger cycle that includes our algorithms, continuously improving results.

- In this work: self-feedback outside the LLM API.
- Existing examples: self-feedback inside the LLM API.

Significance of using LLMs:

- Acts as a bridge between vague user needs and specific algorithmic criteria.
- Understands cycle results and reflects them in subsequent cycles to improve responses.

Premises

Two main components:

- LLM: used for intuitive selection; bridges the gap between the user's relatively vague intentions and the optimizer's specific criteria.
- Heuristic optimizer (GA): used for relatively specific optimization.

Use cycles: by repeating cycle execution, we aim to create a structure that approaches the intended results on its own.

Details

Parameter conversion: convert intuitive selections into clear parameter inputs for the algorithm, using the LLM with structured output.

- Adjust the number of additional subdivisions based on the zone grid and grid size:
  - number_additional_subdivision_x: int
  - number_additional_subdivision_y: int
- Prioritize placement close to the boundary for each use, based on the prompt:
  - place_rest_close_to_boundary: bool
  - place_office_close_to_boundary: bool
  - place_lobby_close_to_boundary: bool
  - place_coworking_close_to_boundary: bool
- Ask about the desired percentage of mixture of different uses in adjacent patches:
  - percentage_of_mixture: number
- Inquire about the percentage each use should occupy:
  - office: number
  - coworking: number
  - lobby: number
  - rest: number

Optimization based on LLM responses: use the LLM's responses as optimization criteria, transforming the user's intent into specific criteria.

Incorporate optimization results into the next cycle: insert the optimization results as references when asking the LLM in the next cycle, which is what gives repeating the [LLM answer - optimize results with the answer] cycle its significance.

Cycle structure. One cycle consists of:

- First [LLM answer - optimize results with the answer]: ask the LLM about subdivision criteria based on the prompt, before optimizing zone usage.
- Second [LLM answer - optimize results with the answer]: request intuitive answers from the LLM regarding zone configuration based on the prompt.

From the second cycle on, directly request improved responses by providing the actual optimization results along with the previous LLM responses.

Test Results

Case 1. Prompt: In a large space, the office spaces are gathered in the center for a common goal. I want to place the other spaces close to the boundary.

Case 2. Prompt: The goal is to have teams of about 5 people, working in silos. Therefore, we want mixed spaces where no single use is concentrated.

Case 3. Prompt: Prioritize placing the office close to the boundary to easily receive sunlight, and place other spaces inside.

Conclusion

This work aims to improve final results by expanding the scope of self-feedback beyond the LLM itself to include LLM requests, optimization, and post-processing. As cycles repeat, the results approach the intended outcome, reducing the incompleteness of the initial values the LLM relies on.
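The expanded cycle can be sketched in a few lines of plain JavaScript. Here `askLLM` and `optimizeZones` are hypothetical stubs standing in for the structured-output LLM call and the GA optimizer; only the cycle structure mirrors this work.

```javascript
// Run the [LLM answer -> optimize with the answer] cycle numCycles times.
// From the second cycle on, the previous parameters and their actual
// optimization result are handed back to the LLM for improvement.
function runCycles(prompt, numCycles, askLLM, optimizeZones) {
  let previous = null; // { params, result } from the last cycle
  for (let c = 0; c < numCycles; c++) {
    const params = askLLM(prompt, previous);
    const result = optimizeZones(params);
    previous = { params, result };
  }
  return previous;
}

// Toy stubs: the "LLM" nudges percentage_of_mixture toward what the
// optimizer actually achieved in the previous cycle.
const askLLM = (prompt, prev) => ({
  percentage_of_mixture: prev
    ? (prev.params.percentage_of_mixture + prev.result.achieved) / 2
    : 80,
});
const optimizeZones = (params) => ({ achieved: params.percentage_of_mixture * 0.5 });

const final = runCycles("mixed spaces, no concentrated use", 3, askLLM, optimizeZones);
console.log(final.result.achieved); // 22.5
```

The point of the structure is that the optimizer's real output, not just the LLM's own reflection, drives each revision of the parameters.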