Comparative differences between games design and simulation design - Part 10: Sim engine data sets versus games design data sets.

Introduction

Back at the beginning of my transition into this different industry, I wrote about a unique difference between games design and sim engines, with a brief breakdown of the main differences from an overall view.  Following on from that, I thought a breakdown of the internal development process would also be beneficial: an explanation of how both engines use certain setups to develop assets, background design and the core environments.

Why do the engines use different data sets to design environments?

The following sections break down three main areas that allow each engine to look and work the way it does.  Although similar in design, the data involved is used very differently between the two engines.

Game Engines

As described in a previous post, games design engines are built around rendering the environment in real time (where possible).  Most designers' PCs have a GPU, and the latest series of graphics cards in particular act as a secondary processor for large amounts of computational work, so it makes more sense for the GPU to handle the bulk of the graphics and rendering processing rather than the CPU.

Height Maps

The height map defines the terrain as a top-down colour map that uses two main colours to differentiate height: black is the lowest point of the environment and white is the highest.  Various shades of grey show the steepness of the slopes in between, so the two main pieces of data needed are the world heights that white and black represent.
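As a rough illustration of that mapping (a minimal sketch, not any engine's actual importer; the file name and calibration heights are made up for the example), decoding a greyscale height map only needs those two values:

```python
import numpy as np
from PIL import Image

# Hypothetical calibration: the world heights (metres) that black and white represent.
BLACK_HEIGHT = 0.0
WHITE_HEIGHT = 512.0

# Load an 8-bit greyscale height map (file name is invented for this sketch).
pixels = np.asarray(Image.open("terrain_heightmap.png").convert("L"), dtype=np.float32)

# Map 0..255 linearly onto the black..white height range.
heights = BLACK_HEIGHT + (pixels / 255.0) * (WHITE_HEIGHT - BLACK_HEIGHT)

print(f"lowest point: {heights.min():.1f} m, highest point: {heights.max():.1f} m")
```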

It is the same design process used for bump mapping, which was common before normal maps became the preferred option; a normal map can fake surface detail on both internal and external parts of a model without that detail actually being designed on the model itself.

Most engines now have a terrain editor integrated within the engine itself to make designing the environment easier and more realistic, and unlike simulation products, most game environments are conceptual rather than copies of real places.  Height mapping can still be an effective way to start an environment off correctly, but the more advanced creation and editing tools now sit within the engine's own design tools.



Nav Mesh Design

A nav mesh effectively defines where an AI, or AI pawn, can walk within the game environment.  The best way to describe it is as the area an AI can and cannot go: if there is no nav mesh in an area, the AI will walk to the boundary of where it can go and stop.

Within UE4, there is a selection of tools specifically designed to calculate and process the nav area the AI can work within, and they do that processing in real time while you, as the designer, get on with other work.  If the environment is changed or edited, the nav mesh recalculates in real time to work around that change, keeping you going without hindering the development process.

Another tool that can be used is the nav mesh proxy (the nav link proxy in UE4), which joins up working nav mesh areas that cannot connect naturally because of things like temporary blockages or doorways.  In a basic context, a proxy is like a bridge between two landmasses that allows you to cross over.  Although it may sound like something an AI should already know how to do, you would be surprised: without one, the AI will stop and walk into the frame of the door, not knowing it can travel through.  However clever an AI is designed to be, it is still bound by the rules you design, and unless instructed or set up to do so, it will not think logically for itself.  The proxy essentially tells the AI: there is a bridge between these two areas; this bridge will allow you to cross over and carry on doing your job.
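To make the bridge analogy concrete, here is a minimal conceptual sketch (not UE4's actual nav system; the region names are invented) that treats the nav mesh as a graph of walkable areas, with the proxy as one extra edge:

```python
from collections import deque

# Walkable nav mesh areas as a graph. A doorway splits the mesh into two
# disconnected regions (region names are invented for this sketch).
nav_edges = {
    "room_a_1": {"room_a_2"},
    "room_a_2": {"room_a_1"},
    "room_b_1": {"room_b_2"},
    "room_b_2": {"room_b_1"},
}

def reachable(start, goal):
    """Breadth-first search: can the AI walk from start to goal?"""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return True
        for nxt in nav_edges.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(reachable("room_a_1", "room_b_2"))  # False: the AI stops at the doorway

# The proxy is the bridge: one extra edge joining the two regions.
nav_edges["room_a_2"].add("room_b_1")
nav_edges["room_b_1"].add("room_a_2")

print(reachable("room_a_1", "room_b_2"))  # True: the AI crosses and carries on
```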

The issues that happen when working with this style of processing come from the environment itself.  One issue I found while making my models, especially the walls and environment pieces, was that the nav mesh would not process and stay processed.  I researched for several days, thinking it might be an issue with the custom collision I had made for the objects, so I redesigned it to make sure it wasn't human error in the environment assets.  It was only by coincidence that I noticed that the ground and walls had a particular collision preset selected (dynamic collision), which was causing the issue.  When I converted it to BlockAll, the nav mesh worked and stayed fixed.
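If the editor's Python plugin is enabled, a sweep like the rough sketch below could have saved me those days; it assumes the offending actors are selected StaticMeshActors, and the property and function names (exposed from UE4's Blueprint API) should be treated as assumptions rather than a polished tool:

```python
import unreal

# For every actor selected in the editor, force its static mesh component onto
# the BlockAll collision preset so the nav mesh can build against it.
for actor in unreal.EditorLevelLibrary.get_selected_level_actors():
    if isinstance(actor, unreal.StaticMeshActor):
        component = actor.get_editor_property("static_mesh_component")
        component.set_collision_profile_name("BlockAll")
        print(f"Set BlockAll on {actor.get_name()}")
```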

Material Design

Material design, which allows for more defined surfaces, alters depending on the engine used.  In this case I'm going to use UE4 as the example, as I'm more comfortable with the UI and working environment it provides.


Much like the UE4 environment for BPs, the material editor has the same style of "noodle" interface that wires node outputs into inputs.  The part I want to break down and look into, however, is the maps a material uses and how they can develop the overall asset without impacting the engine or the game.

So a normal asset will contain the following maps to allow for a more effective-looking asset within the created world (a rough sketch of these slots in code follows the list):

  • Base colour / albedo
  • Roughness or metallic (or both)
  • Specular or gloss
  • Normal maps
  • Ambient occlusion, dependent on the asset
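As a minimal sketch of those slots (a generic PBR description, not UE4's actual material class; the texture names and scalar parameters are made up), an asset's material might be described like this:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PBRMaterial:
    """Generic physically-based material slots; names are illustrative only."""
    base_color: str                          # albedo texture path
    roughness: Optional[str] = None          # roughness map, if used
    metallic: Optional[str] = None           # metallic map, if used
    specular: Optional[str] = None           # specular / gloss map, if used
    normal: Optional[str] = None             # normal map
    ambient_occlusion: Optional[str] = None  # AO map, dependent on the asset
    normal_strength: float = 1.0             # scalar tweakable at edit time
    brightness: float = 1.0                  # scalar tweakable at edit time

# Hypothetical asset: exaggerate the normal map to bring out dents and damage.
crate = PBRMaterial(
    base_color="T_Crate_Albedo.png",
    roughness="T_Crate_Rough.png",
    normal="T_Crate_Normal.png",
    normal_strength=1.5,
)
print(crate)
```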


Within Unreal, the material setup allows for real-time editing of how objects look, changing values depending on what is needed.  For instance, I can change a value to make a surface brighter or dimmer, or make the normal mapping come through much more strongly than usual.

This can provide effects that make things look more realistic, such as dents and damage on objects, depending on the style of aesthetic.


Simulation Engines

As previously said, simulation engines work on a different concept to games design engines.  A sim engine is designed around collating data about the environment and presenting it in a way that allows the IGs (image generators) to see the world, while also allowing the scenario machine to place the civilians and other important scenario parts.

Also unlike games engines, it seems that although good GPUs are still necessary for the graphics, after a few months of playing with sim engines it feels like the CPU does a lot of the footwork instead.  Although this makes little sense to me logically, it depends on the construction and design of the engine; the two types do not feel optimised in the same way, or seem to process their data in the same way.

An example of how I came to this assumption is a project where we had two scenario machines running the simulation, both with an i7 Kaby Lake processor and an Nvidia GTX 1070 GPU.  One sim engine had working textures while the other had no textures on the buildings.  I found the machine with textured buildings ran at a lower frame rate and presented more frame drops than the one without, which makes me think the environment is still being rendered and processed by the CPU.

In these examples I will be sticking with VR-Forces as my sim engine, as it's the engine I have had more experience with than the others I have access to.

Shapefiles

Shapefiles work as an array-based structure, but with a visual equivalent of how we design environments within games design.  Shapefiles can be used to hold or reference data for certain attributes, such as (a short reading sketch follows the list):


  • line data for roads / pavements
  • polygon data for building design
  • point data for assets such as trees, cars, extra buildings and weaponry
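As a rough sketch using the pyshp library (`pip install pyshp`; the file name is made up for the example), you can inspect which of those geometry types a given shapefile holds:

```python
import shapefile  # the pyshp library

# Open a shapefile (file name is invented for this sketch).
sf = shapefile.Reader("city_buildings")

# The shape type says whether this file holds point, line or polygon data.
print("geometry type:", sf.shapeTypeName)  # e.g. POLYGON for building footprints
print("record count:", len(sf))
print("attribute fields:", [f[0] for f in sf.fields[1:]])  # skip the deletion flag
```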



In regards to holding data, a map area of around 30 MB can hold 10,000-plus buildings with core data such as building height, vertex data for each part, building name, building area and XYZ position in the world, to name a few.  This can then be converted and read as a database within engines like VR-Forces, in Excel, and even in UE4 if read and processed correctly.
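Continuing the pyshp sketch above, each shape and record pair reads like a database row; the attribute field names here (NAME, HEIGHT) are assumptions, since real data sets vary:

```python
import shapefile

sf = shapefile.Reader("city_buildings")  # same invented file as above

# Each shape/record pair is effectively a database row: geometry plus attributes.
for shape_record in sf.iterShapeRecords():
    attrs = shape_record.record.as_dict()      # e.g. {"NAME": ..., "HEIGHT": ...}
    vertex_count = len(shape_record.shape.points)
    print(attrs.get("NAME"), attrs.get("HEIGHT"), vertex_count)
```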



With this in mind, I worked on a project that ran multiple shapefiles for different things such as roads.  Although it can be difficult to edit the environment if something is out of place, since it can't be changed in real time, holding all the objects as a database is a good way to store data visually, in case you want to check it for inconsistencies or edit it at any time.  To make sure it runs correctly, though, it has to be attached to VR-Forces properly through the earth file.

DTED

DTED (Digital Terrain Elevation Data) is a different data set for height visualisation: it uses numeric elevation data to build a terrain map that scenarios can use to make the environment closer to what it is like in real life.  The other main difference is that DTED comes in levels depending on the quality of information your project needs.  The usual formats provided are the following:
  • Level 0 has a post spacing of approximately 900 metres.
  • Level 1 has a post spacing of approximately 90 metres.
  • Level 2 has a post spacing of approximately 30 metres.

The automatic response would be that Level 2 data is better, since there is more data per sector and the quality is more refined, but that quality means more data and, unlike shapefiles, the files do not stay small: Level 2 needs much more memory.  And to get the best quality you also need the best source data, in the form of LIDAR or other formats, plus the ability to process and compute it.
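As a back-of-the-envelope sketch of that trade-off, assuming the standard one-degree DTED cell and 16-bit elevation posts (real file sizes vary with latitude and format overhead):

```python
# Posts per one-degree cell side at each DTED level, assuming 2-byte posts.
levels = {
    0: 121,   # ~900 m post spacing -> 121 x 121 posts per cell
    1: 1201,  # ~90 m post spacing  -> 1201 x 1201 posts per cell
    2: 3601,  # ~30 m post spacing  -> 3601 x 3601 posts per cell
}

for level, posts in levels.items():
    size_mb = posts * posts * 2 / 1024 / 1024
    print(f"DTED Level {level}: {posts * posts:>10,} posts, roughly {size_mb:.1f} MB per cell")
```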

There are some similarities with games design height maps, with the added difference that where a height map runs from black to white, the DTED visualisation runs from blue to yellow, allowing different heights to be read across the whole map.

The issue is that the engine sometimes can't read the DTED.  During the beginning months I had three different variants of DTED to try and make work in VR-Forces, and they would not work effectively.  The likely reasons it didn't work are either that the data wasn't being processed correctly or that the data was corrupted.
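One way to rule out corruption before blaming the engine is to open the cell with a general-purpose reader such as GDAL, which supports the DTED format (a minimal sketch, assuming GDAL's Python bindings are installed; the file name is made up):

```python
from osgeo import gdal

# Try to open a DTED cell (file name is invented for this sketch).
dataset = gdal.Open("n51_e000.dt2")
if dataset is None:
    print("GDAL could not read the file: likely corrupt or not valid DTED")
else:
    band = dataset.GetRasterBand(1)
    stats = band.GetStatistics(True, True)  # min, max, mean, standard deviation
    print(f"size: {dataset.RasterXSize} x {dataset.RasterYSize} posts")
    print(f"elevation range: {stats[0]:.0f} m to {stats[1]:.0f} m")
```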

Nav Mesh Design

The nav mesh within VR-Forces works differently to the one used in UE4.  It works by taking multiple parts of the environment and then combining them into a single working nav mesh.

Usually the main parts used to create the nav mesh within VR-Forces are the objects built or rendered on top of the terrain, such as buildings and lakes.  The process is also not done in real time and takes the whole processing capacity of VR-Forces to work through the calculations, so kicking off the processing marks what I call the "tea break": you leave it and make a tea, as it can sometimes take up to ten minutes to calculate.  The data is processed on the back end and the visuals are rendered separately, which means that if the back end isn't correct, there is no correct interpretation on the front end.
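To make the contrast with UE4 concrete, here is a toy sketch of the two rebuild models (entirely illustrative, not either engine's real pipeline): a blocking offline bake versus an incremental update:

```python
import time

def offline_bake(areas):
    """VR-Forces style: one blocking pass over everything; unusable until done."""
    mesh = []
    for area in areas:
        time.sleep(0.1)  # stand-in for the heavy per-area computation
        mesh.append(f"baked:{area}")
    return mesh  # only now can civilians path through the environment

def incremental_update(mesh, changed_area):
    """UE4 style: recalculate just the changed area; the rest stays usable."""
    mesh = [m for m in mesh if m != f"baked:{changed_area}"]
    mesh.append(f"baked:{changed_area}")
    return mesh

city = offline_bake(["district_1", "district_2", "district_3"])
city = incremental_update(city, "district_2")  # a single edit, no full rebake
print(city)
```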

Although this isn't a massive negative, it clearly shows the difference in how the two engine types are built, and I personally find it annoying that the nav mesh needs to be pre-baked before civilians will work correctly and walk around on the ground.

One issue I do have is that the system doesn't show any visual representation of the nav mesh like UE4 does, which would be hugely beneficial.  The bigger issue overall, though, is what happens when the data isn't being used correctly by the engine.

Within VR-Forces, the main data needed is the DTED that the buildings are built on, but VR-Forces sometimes doesn't work well with buildings, and it shows in the final results after the processing is done.  If the buildings aren't placed on the back end properly, then no matter how hard you try, the nav mesh won't process efficiently and your civilians will walk through buildings as if they don't exist.
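A sanity check I would now run first (a generic sketch, not a VR-Forces tool; the elevation values are invented, and in practice they would come from the DTED and shapefile readers sketched above) is whether each building actually clears the terrain it is meant to stand on:

```python
def building_pokes_through(terrain_height_m, base_z_m, building_height_m):
    """True if the building's top rises above the terrain surface.

    If the top never clears the terrain, the back end can treat the building
    as buried, and the baked nav mesh behaves as if it doesn't exist.
    """
    return base_z_m + building_height_m > terrain_height_m

# Hypothetical numbers: terrain at 45 m, building base mis-registered at 10 m.
print(building_pokes_through(45.0, 10.0, 20.0))    # False: buried, AI walks through
print(building_pokes_through(45.0, 10.0, 1000.0))  # True: the 1000 m workaround
```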





Although I contacted the company about this, they apparently didn't have the issue, and to fix it I had to make the buildings 1,000 metres tall so they would poke up through the DTED terrain and actually exist on the back end.  That is a massive annoyance, and it's why I personally hate pre-baking nav meshes.

Any reason for the big difference?

Although at their core they may work on similar architecture, the development process clearly diverges on both sides in how the engines compute, process and render their information to present on screen.

For instance, from experience, it seems the GPU is used more efficiently in a games design engine than in a sim engine.  Although not necessarily a bad thing, this could be a reason why the data used by the two styles of engine is also different.

It surprises me that, with the computational processing available in modern times, more engines don't work efficiently with real-time processing.  The hardware requirements are the same for both styles of engine, which suggests the differences come down to how each company's team develops its engine.  As previously said, sim engines do their job very well, but that doesn't mean they can't be refined to work as efficiently as more modern game engines.
