Post on 03-Sep-2020
1
Good afternoon. Thank you for your kind introduction. Before I start my presentation, I want to introduce myself. I'm
an Assistant Professor at the Computer Science Department of Karabuk University, Turkey. Currently, I'm working on
postdoctoral research at the 3D GIS Research Lab at UTM. //////// So, why do we need the automatic generation of 3D models?
2
When we consider 3D navigation systems, we may need to handle complex topologies, 3D modelling, topological
network analysis, and so on. To realize all of these processes, we need 3D spatial data.
Data collection is a very time-consuming and expensive task; it still consumes up to fifty percent of the
available resources. In order to reduce the cost of data collection, data generation from existing archives and plans has
been widely applied. By using scanning methods, analogue-format data from the archives can be transformed into digital
raster-format data.
Automatically extracting the geometric model of a building is also difficult, and the nodes and links have to be created
manually or semi-manually (Pu and Zlatanova, 2005).
3
In this study, the Multidirectional Scanning for Line Extraction (MUSCLE) Model was used to automatically generate 3D
models of buildings from raster plans. We developed the MUSCLE Model to vectorize the straight lines in raster
images. The algorithm combines line thinning and simple neighborhood techniques for the
vectorization process. Unlike the traditional vectorization process, this model generates straight lines based on a line-thinning
algorithm, without performing line following-chain coding and vector reduction stages.
Once the network generation process is completed, the data is converted into CityGML format in LOD Zero (LOD0) by
using some conversion methods.
4
In the first stage of the algorithm, a threshold process is applied to the raster data. A threshold value is
determined, and every pixel that is darker than this level is assigned black, while every lighter pixel is assigned
white. In this way, the greyscale image is converted into a binary image.
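This thresholding step can be sketched in a few lines of Python (a minimal illustration with hypothetical names, not the MUSCLE source code):

```python
# Minimal thresholding sketch. 'image' is a 2D list of greyscale values in
# 0..255; 'threshold' is the user-defined level from the slide.

def binarize(image, threshold=128):
    """Pixels darker than the threshold become black (1), lighter ones white (0)."""
    return [[1 if pixel < threshold else 0 for pixel in row] for row in image]

grey = [
    [250, 40, 245],
    [240, 35, 250],
    [230, 30, 255],
]
# The dark middle column becomes black (1); the light pixels become white (0).
binary = binarize(grey, threshold=128)
```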
5
In the next stage, the raster image is scanned horizontally and
vertically, pixel by pixel. Nearly vertical lines are obtained by
scanning the images horizontally, while nearly horizontal lines
are obtained by scanning the images vertically.
What do I mean by nearly vertical and nearly horizontal?
(Pointing) According to Figure A, if a line passes through
region one and the centre O, it is nearly vertical. If it passes through
region two and the centre O, it is nearly horizontal. Figures B and C
show sample drawings of nearly vertical and nearly
horizontal lines, respectively.
6
(Pointing) This is a nearly vertical line, and this is a nearly horizontal line in
a floor plan. If we zoom in on them, we can see them like this. In the first step of
scanning, the raster dataset is scanned horizontally to determine the thickness of
the lines and the position of the pixel located at the mid-point of each
line. (Pointing) The red pixels! The distribution of the red pixels shows that
they have continuity for nearly vertical lines but discontinuity for
nearly horizontal lines.
In the second step of scanning, the raster dataset is scanned vertically, and
the same process is carried out for all columns. This time, the red pixels have
continuity for nearly horizontal lines, but discontinuity for nearly
vertical lines.
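One scanning pass can be sketched as follows (a hedged illustration with our own function names, not the original implementation): each row is walked left to right, every run of black pixels is treated as a line cross-section, and its mid-point pixel is recorded as a "red pixel". The vertical scan is the same procedure applied column by column.

```python
def horizontal_scan(binary):
    """Return (row, col) mid-points ("red pixels") of black runs in each row."""
    red = []
    for r, row in enumerate(binary):
        c = 0
        while c < len(row):
            if row[c] == 1:                  # start of a black run
                start = c
                while c < len(row) and row[c] == 1:
                    c += 1
                red.append((r, (start + c - 1) // 2))   # run mid-point
            else:
                c += 1
    return red

# A 3-pixel-wide vertical stroke: the mid-points fall in the same column,
# so they are continuous from row to row.
stroke = [[0, 1, 1, 1, 0]] * 4
red_pixels = horizontal_scan(stroke)
```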
7
This situation is an advantage, because after the horizontal
scanning process, if neighborhood analysis is carried out on the red
pixels, nearly vertical lines can be obtained. ////////// Nearly horizontal
lines cannot be obtained, because their red pixels have no continuity.
8
In the same way, after the vertical scanning, nearly horizontal lines can
be obtained.
9
Afterwards, the two results are combined, and consequently all lines are obtained.
10
In cases where two or more consecutive lines are nearly horizontal or nearly vertical, this process generates wrongly
vectorized lines. For example, three consecutive nearly horizontal lines were horizontally scanned, as displayed in the
figure above. (Pointing at the slide) AB, BC, and CD. Due to the discontinuity of the red pixels between the intersection points A, B,
C, and D, the neighborhood analysis cannot be performed and vectorized data cannot be generated. As seen in the figure
below, when the raster image was vertically scanned during the second step, the neighborhood analysis generated wrong
vectorization results because of the continuity of the red pixels: the algorithm recognizes point A as the beginning of the
line, skips points B and C, and ends the line at point D.
The detection of wrongly vectorized data is performed by comparing the red pixels with the vectorized lines. The red
pixels and the vectorized line have to satisfy the same linear equation. If they do not, a diagonal scanning procedure
is performed.
11
As seen in Figures A and B, the wrongly vectorized lines are diagonally scanned. (Pointing) At a forty-five degree
angle: first from left to right, and then from right to left. In the diagonal scanning process, if there are two consecutive
red pixels along the direction of scanning, the second red pixel is eliminated (Point). Thus, the vectorized line takes a
discontinuous form, as shown in Figure C. After applying the neighborhood analysis, it is now possible to extract
the correct lines, as indicated in Figure D. Then, the corrected vector data is generated by combining both sets of
vectorized lines together (Figure E).
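The elimination rule can be sketched as follows (an illustrative version with our own naming; the scan direction is given as a diagonal step vector). Walking the red pixels in scan order, any pixel whose diagonal predecessor is still red is removed, which breaks the false continuity:

```python
def eliminate_diagonal(red_pixels, direction=(1, 1)):
    """Remove the second of every two consecutive red pixels along the diagonal."""
    red = set(red_pixels)
    dr, dc = direction
    for r, c in sorted(red_pixels):          # walk in scan order
        if (r - dr, c - dc) in red:          # predecessor still red -> drop this one
            red.discard((r, c))
    return red

# Three consecutive diagonal red pixels: the middle one is eliminated, so the
# remaining pixels are no longer continuous.
remaining = eliminate_diagonal([(1, 1), (2, 2), (3, 3)])
```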
12
So, how do we generate the 3D network models of the buildings?
/// By using this MUSCLE Model, the 3D building and topological network models are generated automatically
from raster floor plans. It is based on user-defined data, such as floor numbers
and heights. The user interface of the 3D Model Generation Software is shown in the figure.
13
In the network model, the corridor is the main backbone of the floor plan, since it connects the rooms with all the other entities
in the building. Therefore, determining and modeling the corridor is very important. Once the corridor is indicated by the
user, the algorithm leaves only the corridor in the image and then determines its middle lines based on the MUSCLE Model.
After a number of processes on the selected middle lines, the topological model and the coordinates of the corridor are found, as seen
in the figure.
14
To determine the rooms, the corridor is excluded from the image and only the rooms are left. Then, by applying the same
method, the middle points of the rooms are determined and defined as nodes.
15
After locating the nodes that indicate the corridor and the rooms, the user interactively points out which room nodes connect to
which corridor nodes, and the geometric network for the two-D (2D) floor plan is generated (Point).
Once the stair or elevator nodes are indicated by the user, the network is automatically designed by assigning different
elevation values to each floor based on data such as floor number and floor height, and then the three-D (3D)
Network Model is generated, as seen in the figure.
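The lifting of the 2D floor network into 3D can be sketched as follows (a hedged illustration; the node layout, IDs, and parameter names are our own assumptions, not the software's API). Each floor reuses the same 2D node layout, the z value of a node is its floor number times the floor height, and stair/elevator nodes on adjacent floors are linked vertically:

```python
def build_3d_nodes(nodes_2d, floor_count, floor_height):
    """Return {(node_id, floor): (x, y, z)} by stacking the 2D layout per floor."""
    nodes_3d = {}
    for floor in range(floor_count):
        z = floor * floor_height             # elevation from floor number and height
        for node_id, (x, y) in nodes_2d.items():
            nodes_3d[(node_id, floor)] = (x, y, z)
    return nodes_3d

def vertical_edges(stair_ids, floor_count):
    """Connect each stair/elevator node to the same node one floor up."""
    return [((s, f), (s, f + 1)) for s in stair_ids for f in range(floor_count - 1)]

# Hypothetical two-node floor plan: one room node and one stair node.
nodes = build_3d_nodes({"room1": (4.0, 2.0), "stair1": (0.0, 0.0)}, 3, 3.0)
links = vertical_edges(["stair1"], 3)
```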
16
The next step is the geo-database. /// After generating the network model, the geo-database is
created, and the network model is transferred to this geo-database to be stored.
17
CityGML is an open data model and XML-based format for the storage and exchange of virtual
3D city models.
CityGML supports different Levels of Detail (LODs). The detail of the objects increases
progressively from one LOD to the next.
18
The 3D Network Model is represented using the Transportation Module of CityGML. Transportation
features are represented as a linear network in LOD0.
19
Once the network generation process is completed by using the MUSCLE Model, the data is
converted into CityGML format in LOD Zero (LOD0).
20
The 3D Network Model is represented as a linear network using the Transportation Module of
CityGML. The converted network model in CityGML format, as text, is shown in the figure.
21
The same network model is shown in 3D in the figure. This Java-based application for visualizing
the network model in CityGML was also developed by us.
22
In conclusion, this study described the use of the MUSCLE Model (Multidirectional Scanning for Line Extraction)
for the automatic generation of 3D topological networks in CityGML format from raster-image floor plans.
The algorithm combines line thinning and simple neighborhood techniques for the vectorization
process. Unlike the traditional vectorization process, this model generates straight lines based on a line-thinning
algorithm, without performing line following-chain coding and vector reduction stages.
The results indicate that the model can successfully be used to generate the network models of buildings and
convert them into CityGML.
23
The outcome of this research could be utilized for future 3D GIS applications, such as 3D
network analysis and visualization.
24
25
26
IF A QUESTION COMES ABOUT THE DETAILS:
The detection of wrongly vectorized data is performed by comparing the red pixels with the vectorized lines. The red
pixels and the vectorized line have to satisfy the same linear equation. For example, if the vectorized line
begins at point A and ends at point B, then the linear equation for this line can be formed
as you see on the slide.
When the X coordinate of a red pixel is inserted into the linear equation, and the difference between the Y value
derived from the equation and the Y coordinate of the pixel is greater than a user-defined maximum deviation, the
model defines this line as a wrongly vectorized line.
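This deviation test can be sketched as follows (a minimal illustration with our own function name; it assumes the line through A and B is not vertical):

```python
def is_wrongly_vectorized(a, b, red_pixels, max_deviation):
    """True if any red pixel deviates from the line through A and B by more
    than the user-defined maximum deviation."""
    (xa, ya), (xb, yb) = a, b
    m = (yb - ya) / (xb - xa)        # slope; assumes xa != xb
    c = ya - m * xa                  # intercept of y = m*x + c
    return any(abs((m * x + c) - y) > max_deviation for x, y in red_pixels)

# Red pixels lying on the line y = x pass the test; an outlier fails it.
ok = is_wrongly_vectorized((0, 0), (10, 10), [(2, 2), (5, 5)], max_deviation=1.0)
bad = is_wrongly_vectorized((0, 0), (10, 10), [(2, 2), (5, 9)], max_deviation=1.0)
```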
27
As seen in Figures A and B, the wrongly vectorized lines are diagonally scanned. (Pointing) At a forty-five degree
angle: first from left to right, and then from right to left. In the diagonal scanning process, if there are two consecutive
red pixels along the direction of scanning, the second red pixel is eliminated (Point). Thus, the vectorized line takes a
discontinuous form, as shown in Figure C. After applying the neighborhood analysis, the lines that failed to reach the
acceptable number of pixels were not vectorized. The continuous pixels, determined by implementing diagonal scanning
from both directions, were vectorized, as indicated in Figure D. Then, the corrected vector data is generated by combining
both sets of vectorized lines together (Figure E).