QBatica is a proprietary on-premise application that I developed in several iterations between July 2018 and September 2019. Its goal is to provide real-time 3D pre-visualization of geometries, with the ability to arrange a basic light set, procedurally generate terrains at a wide scale, and massively instance geometries. It is written mainly in Python, using the OpenGL bindings as the graphics engine and the PyQt libraries for the graphical interface, together with external Python scripts such as the Roadmap Generator and the instancesGenerator, and several custom GLSL shaders.
The project grew out of the need to compensate for some shortcomings of the main 3D rendering packages (such as Autodesk Maya) in customizing the graphics engine (see Studio soluzione megalopoli), and to give me an "open source" independence from paid computer graphics software and from the limitations (at the time) of other free tools. With the introduction of Blender's Eevee render engine those needs were met, and further development of this program was no longer necessary.
The software can create displacements, procedural terrains, and massive instancing of procedural elements and forests. Among the various features, it is possible to configure animated cameras through keyframes on the timeline (with import and export), as well as a skybox, ambient occlusion, volumetric lights, and so on.
In QBatica, procedural geometry is generated on a virtual grid whose resolution is controlled through parameters. The polygon count depends on the user's parameter, and the position in space of each generated vertex is assigned through a map.
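As a minimal sketch of the idea (hypothetical names, not QBatica's actual API), a resolution-driven grid of this kind can be generated like this: the parameter fixes the number of quads, and a height-lookup callable assigns each vertex its position.

```python
# Sketch of a resolution-driven virtual grid (hypothetical helper, not
# QBatica's implementation): `resolution` fixes the quad count and
# `height_at(u, v)` maps normalized coordinates to a height value.
def build_grid(resolution, size, height_at):
    """Return (x, y, z) vertices for a grid of resolution x resolution
    quads spanning `size` world units."""
    step = size / resolution
    vertices = []
    for iz in range(resolution + 1):
        for ix in range(resolution + 1):
            x = ix * step
            z = iz * step
            y = height_at(ix / resolution, iz / resolution)
            vertices.append((x, y, z))
    return vertices

# A flat grid (height 0 everywhere) of 2x2 quads has 3x3 = 9 vertices.
flat = build_grid(2, 34.0, lambda u, v: 0.0)
```

Raising the resolution parameter quadratically increases the vertex (and polygon) count, which is why the real-time setting described below has to be balanced against GPU performance.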
The virtual grid is handled by the QBaticaGrid scene object, which extends QBaticaGeometry. It depends on a series of attributes that specify the resolution and the heightmap texture. The grid is also equipped with the mirrorMaze attribute, which replicates the procedural generation a configurable number of times around the central geometry, in order to create depth and prevent the edge of the grid from being visible from the camera position. This attribute, too, can be set in real time (depending on the user's GPU, lag increases proportionally to the polygon count), so it can be calibrated against the trade-off between the quality of the shot and the performance of the viewport or of the render.
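Conceptually, the mirrorMaze replication amounts to tiling copies of the central grid in concentric rings around it. A minimal sketch (hypothetical helper, not QBatica's code) of the tile offsets:

```python
# Sketch of the mirrorMaze idea: replicate the central grid in concentric
# rings of tiles so its edge stays out of the camera's sight.
# rings=1 yields the 8 tiles immediately surrounding the center.
def mirror_offsets(rings, tile_size):
    """World-space (x, z) offsets for every replicated tile."""
    offsets = []
    for iz in range(-rings, rings + 1):
        for ix in range(-rings, rings + 1):
            if ix == 0 and iz == 0:
                continue  # the central geometry itself is not replicated
            offsets.append((ix * tile_size, iz * tile_size))
    return offsets
```

One ring around the center adds 8 copies, two rings add 24, so the polygon count (and the lag mentioned above) grows quickly with this attribute.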
The displacement is computed from the resolution defined by the user and the height information supplied by the heightmap.
def __calculateDisplace(self, xCoord, zCoord):
    # No heightmap assigned: the grid stays flat.
    if "null" in self.heightMap or self.heightMapIm is None:
        return 0.0
    width, height = self.heightMapIm.size
    heightMapSize = width * 1.000
    # Map the world-space coordinate (grid side of 34 units) onto the
    # heightmap pixel space:  u : heightMapSize = coord : 34.000
    u = (xCoord * (heightMapSize - 1)) / 34
    v = (zCoord * (heightMapSize - 1)) / 34
    colorMapValue = self.heightMapIm.getpixel((u, v))
    # Use the red channel, normalized to [0, 1], scaled by the multiplier.
    dispFactor = colorMapValue[0] / 255.000
    dispFactor *= self.heightMapMultiplier
    return dispFactor
After the displacement has been applied, the QBaticaGrid object takes care of recomputing the normals of the newly generated polygons.
def __recomputeNormals(self, dataFaces, dataVertices, dataNormals, dataTangents):
    # Walk the index buffer one quad (four indices) at a time.
    i = 0
    while i <= len(dataFaces) - 4:
        # Face indices are 1-based (OBJ convention), hence the -1.
        faceA = dataFaces[i] - 1
        faceB = dataFaces[i + 1] - 1
        faceC = dataFaces[i + 2] - 1
        faceD = dataFaces[i + 3] - 1
        # Two edges of the quad sharing vertex A...
        adir = [
            dataVertices[faceB][0] - dataVertices[faceA][0],
            dataVertices[faceB][1] - dataVertices[faceA][1],
            dataVertices[faceB][2] - dataVertices[faceA][2]
        ]
        bdir = [
            dataVertices[faceD][0] - dataVertices[faceA][0],
            dataVertices[faceD][1] - dataVertices[faceA][1],
            dataVertices[faceD][2] - dataVertices[faceA][2]
        ]
        # ...whose cross product gives the face normal.
        recomputedNormal = np.cross(adir, bdir)
        recomputedNormal = glUtils.normalizeByMax(recomputedNormal)
        recomputedTangent = [recomputedNormal[0], recomputedNormal[2], recomputedNormal[1]]
        # Flat shading: all four vertices of the quad share the same
        # normal and tangent.
        dataNormals[faceA] = recomputedNormal
        dataNormals[faceB] = recomputedNormal
        dataNormals[faceC] = recomputedNormal
        dataNormals[faceD] = recomputedNormal
        dataTangents[faceA] = recomputedTangent
        dataTangents[faceB] = recomputedTangent
        dataTangents[faceC] = recomputedTangent
        dataTangents[faceD] = recomputedTangent
        i += 4
    return (dataNormals, dataTangents)
The software can also procedurally instantiate a massive amount of geometries.
For the "urban" massive instancing, instead, I developed the Instance Shader
, which is based on the reading of a JSON generated by the instancesGenerator.py
plugin-in. The instancesGenerator_overlapChecker.py
is a more advanced version of it. These plug-ins are thought to read the maps produced by the Roadmap Generator and to instantiate (at the data level) up to tens of thousands of buildings, positioning them without overlapping each other, without exceed on the road (by managing scale and position) and rotating them according to the information supplied by the maps through the RGB values. It's capable, moreover, to generate the altitude of the buildings, the type at which then, through shader, is assigned a specific block geometry.
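The overlap-checking part of the process can be sketched roughly as follows (a simplified, hypothetical illustration of the behavior described above, not the plug-in's actual code): candidate building footprints are accepted only if their 2D bounding boxes do not intersect any already-accepted footprint.

```python
# Conceptual sketch of overlap-free building placement (hypothetical,
# greatly simplified): each footprint is an axis-aligned rectangle
# (x, z, width, depth) on the city ground plane.
def overlaps(a, b):
    """True if the two axis-aligned footprints intersect."""
    ax, az, aw, ad = a
    bx, bz, bw, bd = b
    return ax < bx + bw and bx < ax + aw and az < bz + bd and bz < az + ad

def place_buildings(candidates):
    """Greedily keep candidates that don't overlap previously kept ones."""
    placed = []
    for footprint in candidates:
        if not any(overlaps(footprint, other) for other in placed):
            placed.append(footprint)
    return placed
```

The real plug-ins additionally clamp scale and position so buildings do not cross road pixels in the Roadmap Generator's maps, which a simple rejection test like this does not capture.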
I experimented with the shader and the plug-ins to build a city of eight million inhabitants. After modeling a set of blocks in Maya, I imported the OBJs into the scene and applied the Instance Shader to each of them, so that each would read the JSON file carrying the distribution instructions specific to that typology.
I conceived the Forest Shader to generate massive instancing of trees and forests, though it can of course be used for similar purposes. When assigned to a geometry, this shader instantiates it according to a specific JSON file whose protocol is defined to be compliant with the shader itself. The JSON file can contain a long array of 3D space coordinates, each of which instantiates one duplicate of the geometry. Through the shader parameters it is possible to apply multiplication, addition, and subtraction to these positions so that, in the case of forests, the instances can follow the altitude of the terrain as needed. It is furthermore possible, as in the other shaders, to apply the typical Phong textures for diffuse, specular, reflection, normal, and so on.
The JSON file can be generated through a Python plug-in that I named forestInstancesGenerator.py. This script generates the JSON containing the positions and rotations of all the "trees", based on textures supplied by the user.
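A rough sketch of what such a generator might look like (the field names and density-map format here are hypothetical, not the actual protocol): a grayscale map decides where trees are placed, and each placement becomes one entry in the instances array.

```python
import json

# Hypothetical sketch of the JSON protocol described above: an array of
# per-instance 3D positions plus a rotation, consumed by the Forest Shader.
# A tiny grayscale "density map" (2D list of 0-255 values) decides
# where trees are placed.
def generate_forest_json(density_map, cell_size, threshold=128):
    """Emit one instance per map cell whose value exceeds `threshold`."""
    instances = []
    for iz, row in enumerate(density_map):
        for ix, value in enumerate(row):
            if value > threshold:
                instances.append({
                    "position": [ix * cell_size, 0.0, iz * cell_size],
                    "rotation": value % 360,  # deterministic stand-in
                })
    return json.dumps({"instances": instances})

doc = json.loads(generate_forest_json([[0, 200], [255, 50]], 10.0))
```

The y coordinate is left at zero here; in QBatica the shader's multiply/add/subtract parameters can then lift the instances onto the terrain's altitude.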
These shaders leverage the OpenGL instancing technology. As soon as the geometry (QBaticaGeometry) is initialized, it processes the positions and transformations indicated in the JSON. Afterwards, at drawing time, I use the glDrawArraysInstanced call, supplying an array of the instances' positions. This call is invoked in the Framebuffer, in the GBuffer, and in the Render Frame alike, but only when the shader is expected to support instancing, as declared by the parameters of the Forest and Instance shaders.
if self._coreModule.scene["shaders"][self._shaderIndex].hasTessellation():
    glDrawArrays(GL_PATCHES, 0, self._geometry_size)
elif self._coreModule.scene["shaders"][self._shaderIndex].hasInstancing():
    # One draw call renders every instance; the per-instance positions
    # were uploaded when the geometry was initialized.
    glDrawArraysInstanced(GL_QUADS, 0, self._geometry_size, len(self._instancesPositions))
else:
    glDrawArrays(GL_QUADS, 0, self._geometry_size)
The glDrawArraysInstanced call was introduced with OpenGL 3 precisely to leverage the GPU's runtime computing power.
Likewise, the binding of the Vertex Array, Attribute Pointer, and Attribute Divisor must be handled with a dedicated path for shaders that support instancing:
if self._coreModule.scene["shaders"][self._shaderIndex].hasInstancing():
    # Per-instance positions: divisor = 1 advances this attribute once
    # per instance instead of once per vertex.
    self._vertexInstancesPositions.bind()
    glEnableVertexAttribArray(self._vertexInstancesPositions_loc)
    glVertexAttribPointer(self._vertexInstancesPositions_loc, 3, GL_FLOAT, False, 0, None)
    glVertexAttribDivisor(self._vertexInstancesPositions_loc, 1)
    self._vertexInstancesPositions.unbind()
    # Per-instance transforms on attribute location 6.
    self._vertexInstancesTransforms.bind()
    glEnableVertexAttribArray(6)
    glVertexAttribPointer(6, 3, GL_FLOAT, False, 0, None)
    glVertexAttribDivisor(6, 1)
    self._vertexInstancesTransforms.unbind()
QBatica makes the following shader typologies available for assignment to the geometries imported into the project:

- Forest Shader: GLSL ForestVertex.vs; the other stages are those of the Standard Shader.
- City Ground Shader: CityGroundFragment.fs and CityGroundTessellationEvaluate.tes.
- Mask Shader (MaskFragment.fs): overrides the main shader, assigning a flat RGB color that does not react to lights; it is intended for multilayer sequences and allows rendering utility masks for compositing.

QBatica also relies on a set of utility shaders to visualize helper elements.
I wrote these shaders in GLSL so as to be fully autonomous in binding the attributes and to manage particular utility visualizations. On each shader it is possible to activate flags that render in the viewport, respectively, the position, the normal, and the z-depth of the geometries to which they are assigned. This allows visual debugging of information related to the polygons or to the geometries generated at render time on the GPU. There is also the possibility of activating fog on a shader which, combined with the other shaders on which it is activated, produces the scene fog.
Shader parameters are defined in a Python file (<shader name>Parameters.py) that contains them in a JSON-like structure. The UI classes of QBatica read these files and procedurally generate the control panels for the related shader's attributes.
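A minimal sketch of this pattern (the attribute names, value ranges, and widget mapping are hypothetical, and plain dicts stand in for the actual PyQt panels):

```python
# Hypothetical shader parameter definition, in the spirit of a
# <shader name>Parameters.py file: a JSON-like structure describing
# each attribute's type, default, and range.
FOREST_PARAMETERS = {
    "positionMultiplier": {"type": "float", "default": 1.0, "min": 0.0, "max": 10.0},
    "positionOffsetY":    {"type": "float", "default": 0.0, "min": -50.0, "max": 50.0},
    "useFog":             {"type": "bool",  "default": False},
}

def build_panel(parameters):
    """Procedurally derive one widget description per shader attribute,
    the way the UI layer turns these files into control panels."""
    widget_for = {"float": "slider", "bool": "checkbox"}
    return [
        {"label": name, "widget": widget_for[spec["type"]], "value": spec["default"]}
        for name, spec in parameters.items()
    ]
```

The appeal of this design is that adding a new shader attribute only means editing its parameters file: the regulation panel updates itself without any UI code changes.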