Introduction

Formation of three-dimensional (3D) real properties consisting of (legal) volumes for dwellings (condominiums/apartment units) and other constructions (such as parking facilities and tunnels) has gained increased interest as a tool for managing complex ownership and land use situations. Kitsakis et al. (2018) provide an overview of best practices throughout the world.

Parts of the real property formation process and of real property management may be executed using digital maps and other digitally available information, but the result might still be stored and visualized in two dimensions. It is therefore necessary to interpret and convert the data into 3D in order to use it in a 3D digital environment, such as a Building Information Model (BIM).

The visualization of real property rights, restrictions, and responsibilities (RRR) in three dimensions has been the subject of several publications in recent years (Višnjevac et al. 2019; Janečka et al. 2018). How different user groups interpret the visualized cadastral information has, however, rarely been the main focus of these studies. Dimopoulou et al. (2018) state that traditional two-dimensional footprint registration/visualization of rights may create confusion for users and misinterpretation of complex legal relationships. Furthermore, Janečka et al. (2018) note that registration and representation of legal objects in layers using 2.5D may be a temporary, but not final, solution, since it is difficult to obtain and visualize complete information about property rights relationships, and user-friendly tools for 3D analysis are still missing. In addition, the choice of graphical representation and visual attributes plays an important role in users’ assessment of structural and legal boundaries, as recent research in 3D visualization of RRR data has also pointed out (Wang et al. 2016; Atazadeh et al. 2017). This paper is an extension of research on the visualization of RRRs published in Larsson et al. (2018, in print). The study was conducted as part of a research project focusing on visualization and conversion of 3D cadastral information from analogue to digital form (Andrée et al. 2017, 2018a, 2018b, 2020; Larsson et al. 2018, in print).

The purposes of this study are to:

  a) Devise a novel visual and functional conceptualization for the visualization of 3D RRR data in Sweden and study how users respond to such a system, and

  b) Systematically assess and validate the impact of visual attributes found to be important in this user study.

The functional and visual features of our system, presented in “Visualization of 3D RRR data”, introduce new ideas in visual design for 3D cadastres. The feedback received in semi-structured interviews with users provides important lessons learned from the design of the visualizations; it informed the subsequent research in this study and raised interesting issues for future research in visualization in general. The novel method for assessing the impact of rendering attributes, described in “Assessment of the impact of rendering attributes”, represents the major contribution of this paper. It provides a new approach to the systematic analysis of visualizations, with relevance beyond the application to 3D cadastres. The method enables quantitative analysis of the consequences of graphical design choices with respect to the risks of confusing or overlooking relevant elements in a visualization. Its demonstration on the visualizations used in our interview study validates users’ conceptions and reveals interesting relationships between rendering attributes and their visual effects.

The remainder of this paper is structured as follows: “Related work” summarizes relevant research on the management and visualization of 3D cadastre data and RRR, as well as previous research addressing the role of graphical attributes and rendering styles within the field of 3D RRR visualization. “Visualization of 3D RRR data” presents the conceptual design of our 3D RRR visualization by introducing the case (study area), describing the interactive visualization prototype and the qualitative user study with semi-structured interviews, and summarizing the main findings from the interviews. Feedback from several users in the interviews indicated problems in the use of transparency in the visualizations. This initiated subsequent research into a new method for the quantitative assessment of the impact of different rendering attributes. In “Assessment of the impact of rendering attributes”, we describe this method formally and apply it to visualizations of our case study. In “Discussion and conclusions”, we discuss the general findings from the interviews, as well as the detailed results from the formal analysis presented in “Assessment of the impact of rendering attributes”.

Related work

The visualization of digital 3D cadastral information, in relation to the BIM, has in recent years been the subject of much research in Sweden and abroad (Andrée et al. 2018a, 2018b; Karabin et al., in print; Larsson et al. 2018; Pouliot et al. 2014, 2016, 2018; Tekavec et al. 2018; Atazadeh et al. 2017). The research on visualization covers a wide range of topics, such as digital cartographic modelling for a 3D cadastre (Wang and Yu 2018); the creation of a condominium model from floor plans and its import into an augmented reality (AR) environment (Navratil et al. 2018); and the visualization of a coherent set of 3D property units (Ying et al. 2016). Research has also been conducted on the visualization of legal land objects for water bodies in the context of an n-dimensional cadastre (Alberdi and Erba 2018). Furthermore, van Oosterom et al. (2019) and Cemellini et al. (2018) have explored the challenges concerning the dissemination and visualization of legal boundaries of cadastral parcels in 3D, based on research into problems of ambiguous perception and occlusion (in terms of shape, size, and position) of objects.

Whilst many researchers have pointed out the usefulness of 3D cadastre visualization in combination with the BIM, Neuville et al. (2019) state that decision-making based on 3D visualization remains a challenge due to the high density of spatial information inside the 3D model. To alleviate the ubiquitous risk of occlusion, the authors suggested algorithms for automatic 3D viewpoint management that optimize visibility of objects inside the viewport. Similarly, in Ying et al. (2019), the authors employ and further develop strategies for decluttering 2D map data (Böttger et al. 2008; Haunert and Sering 2011) to tackle the occlusion problem, and they devise 3D distortion techniques to alter the spatial layout of models of 3D property units for better visibility. Instead of manipulating the geometric layout of the visualization model, Wang et al. (2016) suggested using transparency as a visual variable not only to encode data but also to relieve the occlusion inherent in 3D visualizations of cadastres. Accordingly, Atazadeh et al. (2019) and Vandysheva et al. (2011) found mixed transparent and opaque object representations useful to simultaneously visualize structural and legal boundaries in a 3D BIM visualization.

Transparent rendering of objects was also found to be a prioritized functional requirement in the implementation of a test prototype for a web-based 3D cadastral visualization system (Cemellini et al. 2018). Although transparency can be used as a visual variable to encode data (Wang et al. 2012, 2016), Shojaei et al. (2013) assert that transparency, among other parameters, “should be standardized for quick user recognition or user definable to improve the utility of the visualisation”. The use of transparency as a separate visual variable is nevertheless problematic, as transparency in visualization (and computer graphics) is realized through the compositing of colours based on opacity blending, as suggested in the seminal paper by Porter and Duff (1984). Wang et al. (2012), who state that transparency and saturation belong to colour, have also recognized this dependency in the context of the visualization of 3D cadastres. The latter is true in the sense that any change in the transparency of an object affects its apparent colour due to blending.
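For illustration (our restatement of the standard “over” operator, not an equation from the cited works), a foreground colour C_f rendered with opacity α over an opaque background colour C_b yields the composite colour

$$ C=\alpha {C}_{\mathrm{f}}+\left(1-\alpha \right){C}_{\mathrm{b}} $$

Any decrease in opacity α thus pulls the apparent colour C towards the background colour, changing hue, saturation, and value simultaneously.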

The authors (Wang et al. 2012) evaluate value and colour as independent visual variables, following Bertin’s Semiology of Graphics (1983). In colour theory and many areas of geovisualization, however, value is seen as one of three perceptual dimensions of colour, alongside hue and saturation (Seipel and Lim 2017). Notably, in the design of colour scales for maps, hue has been used to depict data categories and has frequently been combined with either value (Aerts et al. 2003; Leitner and Buttenfield 2000; Cheong et al. 2016) or saturation (MacEachren et al. 1998; Burt et al. 2011) to represent another quantitative variable such as uncertainty. Owing to the distinctiveness of hue, colours with different hues are effective for identifying nameable legal and physical objects (Shojaei et al. 2013). However, illumination and shading effects in 3D cadastre visualization, like transparency, alter the appearance of colours, affecting not only value (Wang et al. 2012) but also hue and saturation.

The interest in the visualization of 3D real property is not limited to theoretical research; there are also a number of ongoing international case studies, including several pilot projects. One study from the Netherlands investigated the translation of ownership, as described in legal documents, into legal volumes. This translation was based on the architectural drawings of the buildings and included the creation of a 3D visualization of the 3D rights involved and their registration in the interactive 3D PDF format (Stoter et al. 2016). Victoria, Australia, has ongoing work with a prototype of an interactive digital model (ePlan Victoria) to show legal and physical objects in order to identify all RRRs related to land and real property, where visualization has been one of the studied aspects (Shojaei et al. 2018). A third example is Shenzhen City, China, which has developed a 3D cadastral management prototype (Guo et al. 2014). Pouliot et al. (2018) provide additional examples of best practices.

Although there is a large body of research and best practices on the visualization of 3D cadastres available worldwide, the day-to-day activities of a nation’s property formation process may today, to a greater or lesser degree, be carried out with the use of digital maps and other digitally available information. The result might still be stored and visualized by analogue means, such as map sheets illustrating the legal boundaries of 3D property units, with associated building construction details stored as two-dimensional (2D) drawings in the property formation agencies’ cadastral dossiers. An example is the Swedish 3D real property formation process, as illustrated in Andrée et al. (2020). It is therefore necessary to interpret and convert the data into 3D representations in order for it to be used in a fully 3D digital environment, such as the BIM. In this process of visual design, it is also desirable to provide developers and users with tools to assess the effects of graphical attributes and rendering styles. Although visualization has been a focus of 3D cadastral research during recent years, we still see a need for studies on how different user groups interpret the visualized cadastral information.

Visualization of 3D RRR data

For this research, an interactive visualization prototype was designed and implemented based on a real working scenario linked to an existing case in which 3D property formation had been carried out. The visualizations aim to advance the presentation of the cadastral index map and are therefore based on the property registry, not on the cadastral dossiers. Based on this use case, the proposed information requirements and visualization needs were elicited. Subsequently, users tried the visualization prototype, and interviews were conducted with a number of participants from different user groups with different needs.

The selected pilot area is located in Stockholm and includes a number of existing 2D properties and 3D property spaces, as well as different types of RRRs that are presented in the digital cadastral index map. In addition, the ongoing property formation case managed the re-allotment of a 3D property space from one real property unit to another. Three-dimensional documentation was obtained from the real property owner and, together with the existing documentation in the form of a digital cadastral index map and Stockholm City’s 3D building model, was used to develop a prototype for 3D visualization of the cadastral index map. In Larsson et al. (in print), interpretations were also made of existing property formation acts, and 3D data was created for the re-allotment that could in the future form part of real property formation decisions. These data have been partly used for the visualization in the project on which this paper is based.

Visualization prototype

For our explorative study of visual representations of RRR objects in the context of 3D architectural (BIM) models, we designed an interactive 3D visualization prototype that incorporated existing 3D building models, as well as 3D terrain models. These models were complemented with newly designed visual representations of legal objects, and specific interactive functionality was added to facilitate and demonstrate a possible future workflow in a 3D cadastral management system. For the implementation of the visualization application, we used a high-level VR development tool, Vizard™ (WorldViz), that enables rapid prototyping of interactive 3D content.

Data and graphical representation

Buildings in Stockholm City’s model had previously been designed in Revit Architecture. These architectural models were imported into the visualization application as geometric objects. Other 3D models representing legal objects were modelled manually by a land surveyor using MicroStation, based on the information in the cadastral index map. In this process, the horizontal extent of 2D and 3D property objects originated from the (2D) polygon coordinates, which were combined with the height data specified in the cadastre (cadastral dossier). RRR objects that are represented as 2D polygons or 2D polylines in the original documents, without specification of their vertical extent, were automatically extruded vertically: 2D polygons in the original documents resulted in capped prisms, whereas 2D polylines formed open prisms without caps. The extrusion distance could take on arbitrary values, depending on the type of object.
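As a minimal sketch of this automatic conversion (the function name, parameters, and mesh format are illustrative assumptions, not the project’s actual tooling), a 2D outline can be extruded into a prism as follows:

    import numpy as np

    def extrude_outline(xy, z_low, z_high, capped=True):
        """Extrude a 2D outline from the cadastral index map into a vertical prism.

        xy:     (N, 2) array of outline vertices
        z_low, z_high: vertical extent (from the dossier, or arbitrary when unspecified)
        capped: True for 2D polygons (capped prism), False for 2D polylines (open prism)
        Returns the prism vertices, quad side faces, and optional cap polygons.
        """
        n = len(xy)
        bottom = np.column_stack([xy, np.full(n, float(z_low))])
        top = np.column_stack([xy, np.full(n, float(z_high))])
        vertices = np.vstack([bottom, top])
        # one side quad per outline segment; polygons close the ring, polylines do not
        segments = n if capped else n - 1
        sides = [(i, (i + 1) % n, n + (i + 1) % n, n + i) for i in range(segments)]
        caps = [list(range(n)), list(range(n, 2 * n))] if capped else []
        return vertices, sides, caps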

Two-dimensional polygon areas in the original data that represent volumes of 3D RRR objects were extruded in the 3D model to prisms, with the lower and upper caps placed at the specified levels. Other relevant RRR objects with no spatial extent but with specified positions in the register map were represented as either small spheres or narrow cylinders at their specified positions in the horizontal plane. The visualization incorporated the following objects:

  • Two-dimensional registry map as a reference plane: A part of the registry map including the pilot area was imported from the national cadastre system in the form of a high-resolution raster image.

  • Two-dimensional properties have no vertical boundary limits in the real world, and in the 3D model, they can be represented either as 3D extruded contours (forming uncapped prisms), or as semi-transparent 2D polygons superimposed upon the cadastral index map, as in the case presented here.

  • Three-dimensional properties are most generally represented as polyhedra; in our case, they were mostly created as extrusion objects and manually refined where their boundaries followed irregular building structures.

  • Three-dimensional building models were incorporated at the level of detail exported from Revit Architecture. We used only the outer shells of buildings in the case study, to provide a larger context of physical structures.

  • Terrain data was imported into the model as a regular triangular mesh.

  • Three-dimensional easements and 3D utility easements can have spatially very complex structures; their modelling was therefore done manually by an expert, based on study and interpretation of the original cadastral documents.

  • Easements with no explicit spatial extent or position are represented in the original cadastre by a point symbol and in text form. We added spherical glyphs to the visualization model at their corresponding positions in the horizontal plane.

For the visualization of the legal units, we chose a fixed colour scheme comprising eight different colours for the basic types of objects represented. Being aware that the choice of colours in a final system will eventually follow some (not yet existing) nationally or internationally agreed standard, or even be user-definable (Shojaei et al. 2013), we selected colours based on two main criteria: (a) to be perceptually distinguishable, we chose hues that are far apart on the spectral locus; and (b) objects with similar connotations share similar hues but differ in value (see Fig. 1, top). We offered the opportunity to render the entire model, or only selected 3D objects, at varying levels of transparency. Three predefined transparency levels (0%, 50%, and 90%) were directly accessible via keyboard shortcuts, and arbitrary levels of transparency could be adjusted using the mouse wheel. To mitigate the problem of objects fully blending into their background at high transparency levels, solid objects were complemented with a 1-px-wide wireframe representation in the same colour as the object but with a transparency level inverse to the object’s transparency level (compare Fig. 1, bottom). In addition, selected objects were emphasized with a bold-styled (several pixels wide) black wireframe.

Fig. 1
figure 1

Examples of the visualization prototype. a All object types (legal and physical) are enabled and rendered with no transparency. b Buildings and terrain model are filtered out, and the entire visualization is rendered from a different aspect with 60% transparency (with wireframe outline rendered at 40% transparency). One specific utility easement has been selected and is rendered with no transparency and is emphasized with an extra bold-styled wireframe
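The inverse-transparency rule for the wireframe outlines can be stated compactly. The following lines are a sketch of the behaviour described above, not the prototype’s actual Vizard code:

    def fill_and_wire_alpha(transparency):
        """Opacity of an object's fill and of its 1-px wireframe outline, given the
        object's transparency in [0, 1]. At 60% transparency, the fill is rendered
        with alpha 0.4 and the outline with alpha 0.6 (cf. Fig. 1, bottom)."""
        fill_alpha = 1.0 - transparency  # the surface fades out...
        wire_alpha = transparency        # ...whilst its outline fades in
        return fill_alpha, wire_alpha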

Interaction

As a research tool, our visualization prototype featured only essential interaction in order to be easy to learn by novice users. Hence, interaction functionality was limited to navigation of the 3D visualization model and to a few predefined relational queries and parameter adjustments.

Navigation of the model was entirely facilitated through single-handed mouse-based interaction. Rotation in 3D was accomplished by 2D mouse motion based on the intuitive Arcball navigation metaphor (Shoemake 1992). Likewise, zooming and panning were controlled with mouse motion. Keyboard interaction based on shortcuts was mainly used to enter various modes of mouse-driven interaction (e.g. selection, highlighting) or to initiate other predefined functions.

We implemented further interactions facilitating functions that were deemed relevant for our case study and which were in part reported by Gulliver et al. (2017), Shojaei et al. (2013), or Wang et al. (2012):

  • Selection of 3D objects

  • Adjustment of transparency for the entire model

  • Adjustment of transparency for selected object(s)

  • Selection/filtering of layers of a certain type of object

  • Peeling off selected objects or layers of objects from the model

  • Interrogation and visualization of all legal objects that have a relation to the selected object

  • Query of the cadastral dossier related to the selected object and display in a separate window

Interview study

Interviews and visualization tests were conducted with 13 participants from different user groups. The purpose of these tests was not to collect quantitative data for hypothesis testing; instead, the aim was to engage practitioners in a realistic working situation and to get them acquainted with the new visualization system, so as to qualitatively explore their opinions in subsequent interviews. Participants from different user groups with partially different needs were represented: the Municipal Building Permit Department (City of Stockholm, 1 person); the Municipal Planning Department (City of Stockholm, 2 persons; Täby municipality, 2 persons); the Property Formation Authority (Lantmäteriet [the Swedish mapping, cadastral and land registration authority], 2 persons; Haninge municipality, 2 persons); and two real estate law consultancy companies (Structor, 2 persons; Svefa, 2 persons).

Participants were initially asked to solve a 3D interaction task wherein they had to navigate to a specific 3D legal object and present it from a certain aspect. This was followed by three work tasks involving spatial navigation and queries on the relational situation of some properties and utility easements. Using the 3D visualization model, users had to search in different ways to find the relevant information to answer the task-specific questions:

  • How many parcels are included in the real property unit “Arenan 2”, and how many of these are 3D spaces?

  • How many real property units are affected by the easement with registration id 0180K-2010-14445.4?

  • How many square metres is the area of the real property unit “Arenan 1” according to the national Real Property Register?

To answer those questions, participants had to use the built-in interactive 3D navigation features and relational queries, access the cadastral dossiers of selected properties, and use functions for changing transparency and for peeling off individual objects or objects of a certain type. The purpose of solving those tasks was to familiarize users with the 3D visualization prototype for about 15–20 min. Recording task success rates or completion times was not an objective of this qualitative study; instead, the tasks aimed to engage participants in a realistic working situation by letting them solve real-world questions using the 3D visualizations.

After that, a semi-structured interview was conducted, covering open questions on the systems participants use today, their thoughts about the prototype, and suggestions for its improvement. The interview also included open questions regarding different aspects of the prototype, including interaction and the graphical representation of RRR objects. These open questions served as a guide for the topics to be covered but allowed flexibility depending on the direction of the interview. The reason for using semi-structured interviews was to elicit new aspects and ideas not anticipated by the investigators beforehand. Participants had a total of 60 min to get an introduction, familiarize themselves with the system, and answer the interview questions. Some responses are summarized below.

Results from the interviews

During the interviews, the visualization in the model received diverse comments; some considered it more useful, whilst others found it less suitable than the systems in use today. As an improvement, users mentioned that it was easier to mentally visualize and understand contexts when using the 3D model: a representation in 3D makes it easier to see objects located at different heights and to assess their relation to existing buildings, property boundaries, and RRRs. The interactive model is also better suited to providing an overall picture in which everything can be seen at once, whilst allowing the user to rotate, zoom, or select transparency levels of specific objects of interest.

Regarding the possibility of adjusting transparency in the model, participants perceived this as a positive function that could help when objects occlude one another. Transparency also allows seeing all objects at the same time without switching off any of the layers. Being able to easily distinguish different objects and reveal objects underneath was also considered a good feature of transparency, which was experienced to facilitate navigation and orientation. Participants perceived it as helpful for seeing relationships, such as existing buildings in relation to property boundaries and RRRs, and deemed it useful for displaying conditions and clearly showing a particular object or type of object. The possibility to choose a level of transparency was deemed useful, and participants also saw the need to easily control the transparency themselves, as different transparency levels might be required on different occasions, for different objects within the model, and to suit different users’ impressions. Users experienced the model as difficult to use when all objects were entirely opaque.

Participants felt that the 3D model helped in understanding relationships, particularly being able to see buildings in 3D and their relations to property boundaries and RRRs. The test participants believed that the prototype had potential for development, and they provided many suggestions for improvement. The model was generally assessed to be easy to handle and use, although new users might need extra time to learn how to navigate within it. For example, it was suggested that objects should only be shown in 3D when zoomed in at a certain level of detail, similar to Google Maps. The representation of 2D property boundaries was pointed out as difficult to relate to the 3D property boundaries, since the 2D boundaries were not visible at all height levels in the model. Users suggested representing the 2D boundaries as “walls” passing through all heights in the model, as well as using more visible outer edges for 2D spaces.

Participants expressed different needs and desires regarding what information would be appropriate to add to the model, including the location and configuration of existing building permits and the exact shape and location of 3D property units and easements. Another request was to be able to choose the 2D background from among a 2D representation of the cadastral index map, aerial photos, and a 2D representation of building rights. Admittedly, many of these elements can already be seen today in different applications, but a 3D model is believed to give a better visualization. Advantages of being able to see links between different objects and related official records were raised; those links then need to be visualized in a way that is easy to understand and interpret.

Regarding colour, some participants thought that the colour choices were good, except that the 2D properties were difficult to identify in this model. Participants stated the need for a standardized visualization of the cadastral index map, so that the same colours are always used for the same type of object in different systems. Several participants thought that the colour scheme needed to be reviewed, partly because the colours in the prototype did not agree with what they were used to. The colour scheme should also be evaluated under perceptual aspects and developed to suit most users based on their professional experiences. Metadata describing the quality of the data, such as the positional accuracy of individual objects, needs to be available in the system.

Several participants felt that the system seemed foreign but had promising attributes. However, it was pointed out that cadastral surveyors must be able to manage the system to update the cadastral index map; thus, the system must not be too complex. Competence and skills must be available, and users must understand 3D; this holds for all parts of the industry. The system will also require new skills, such as architectural and BIM competence within the authorities concerned, to handle a more 3D model-based management of the property register and property formation.

The user testing showed predominantly positive views of the 3D visualization, a type of system that is not possible in today’s Real Property Register and management system, and indicated that it could facilitate work for different types of needs and groups within the built environment area. Another result is that visualizing 3D properties and rights is not an easy task. The challenges relate to several aspects, such as how the objects are viewed, the legal issues, status, and what colours are used to visualize different types of objects. Users must drive development in order to create a model that can be further developed and implemented in practice.

Assessment of the impact of rendering attributes

Several comments from the interviewees addressed rendering attributes in the 3D visualizations. The choice of colours was rated as pleasing by some, whereas others noted a large discrepancy with how colour codes are used in their existing systems today. One participant stated that 2D properties were difficult to distinguish. Some agreed that the choice of the best colours requires further adjustment. It was further suggested to develop a common colouring standard, whereas others found that colours might even have to be chosen by the user, possibly depending on professional role or individual preferences. The ability to adjust the transparency of 3D objects was considered a valuable feature, not least to make many objects visible in context. Meanwhile, one participant stated that transparency made it more difficult to discern details.

Obviously, there is no single best choice of colours in a 3D visualization of property rights that simultaneously meets agreed standards, preferences, and perceptual requirements. Yet, regardless of the colour scheme eventually chosen by a software designer or an end user, it should reduce the risk that users overlook details or confuse objects of different identities or types.

To avoid such pitfalls, we suggest here some computational procedures for analyzing 3D renderings in an objective way. We demonstrate these procedures based on our case study; however, such analysis is applicable to virtually any 3D visualization wherein different objects are to be identified based on their distinct colours.

Effects of colour blending

The use of colour as a unique code to identify objects of a certain type can be problematic in 3D visualizations. Colours shift as a result of 3D illumination and shading (Wang et al. 2012); hence, there is no single unique colour identifying an object. Whilst different tones of colour are a desirable spatial cue in 3D visualizations, problems become even greater when transparent rendering causes the initially distinct base colours of objects to blend into similar tones. Figure 2 (top row) illustrates this effect by presenting a 3D-rendered view showing four different types of legal objects at increasing levels of transparency. Visual distinction of objects becomes noticeably more difficult with increased transparency. To quantify this effect, we analyzed picture fragments representing the same object type. For this purpose, we included a separate render pass in our 3D software to create an ID buffer identifying the type of object in every pixel. Figure 2 (bottom, left) shows the content of the ID buffer with four object types. The ID buffer is used to mask out and sample screen pixels per object type. Figure 2 (bottom, middle, and right) shows screen regions covered by the same object type for different levels of transparency. As can be seen, object types 1 (2D property unit) and 2 (3D property unit) are very similar in colour. Also, in the transparent rendering, regions representing object type 2 contain colours similar to those of object type 4.
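The masking step can be sketched as follows (the function and the dict-based output are illustrative assumptions, not our actual render-pass code):

    import numpy as np

    def sample_by_object_type(rgb_image, id_buffer, background_id=0):
        """Mask out and sample screen pixels per object type via an ID buffer.

        rgb_image: (H, W, 3) rendered frame; id_buffer: (H, W) integer image from a
        separate render pass encoding the object type of every pixel. Returns a
        dict mapping object-type id -> (N, 3) array of that type's screen pixels.
        """
        samples = {}
        for obj_id in np.unique(id_buffer):
            if obj_id == background_id:  # skip pixels not covered by a studied object
                continue
            samples[int(obj_id)] = rgb_image[id_buffer == obj_id]
        return samples

    # Number of distinct colour tones per object type (cf. Tables 1 and 2):
    # counts = {t: len(np.unique(px, axis=0)) for t, px in samples.items()}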

Fig. 2
figure 2

a–c Rendering of four fundamental 3D object types with four base colours (light green, dark green, yellow, and red): the same scene rendered with a no transparency, b 50% transparency, and c 90% transparency. d The ID buffer with the four object types rendered as unique codes. e, f Fragments in the rendered scene belonging to one of the four studied object types with e 0% transparency and f 50% transparency

Table 1, Table 2, and Fig. 3 present colour statistics for the four masked object types from a reference viewpoint, as well as from an alternative viewpoint with slightly different scaling. For non-transparent rendering, the number of different colours per object (due to shading and illumination) varies from a few hundred to a few thousand; with increased transparency, this number grows by up to an order of magnitude. As described previously, 2D property units were always rendered as 2D polygons with 50% transparency on top of the cadastral index map, which explains their nearly constant and high number of unique colours in Table 1 and Table 2.

Table 1 Number of colour tones per object in visualizations with different levels of transparency. Numbers are representative of the viewpoint visualized in Fig. 2 (reference viewpoint)
Table 2 Number of colour tones per object in visualizations with different levels of transparency. Numbers are representative of an alternative viewpoint shown in Fig. 2d–f
Fig. 3
figure 3

The number of colour tones for different object types with increasing transparency for the scene rendered a from the reference viewpoint and b from the alternative viewpoint

When objects are represented with larger variations of colour tones on screen, the likelihood increases that colours representing different objects become similar, which may cause confusion for the observer. Can such similarity of colour tones between different objects be assessed objectively?

In the field of computer and machine vision, various methods exist to assess the similarity of images based on colour, among other features (e.g. Thakur and Devi 2011; Zhang 2009). Those methods take into account not only colour but also structural information. Other metrics have been suggested that build on perceptually motivated colour features to quantify image quality and quality differences induced by image distortion or compression (van den Branden Lambrecht and Farrell 1996; Carnec et al. 2008). For the purpose of our analysis, we adopted the idea of determining the similarity of colour tones by comparing image statistics, as previously proposed by Kikuchi et al. (2013). As advocated by Stricker and Orengo (1995), we analyze normalized cumulative colour distribution functions rather than histograms of colour values. We first convert RGB images into an intuitive colour space with hue (H), saturation (S), and value (V) as perceptual dimensions. The cumulative distributions (cd) of the values for hue, saturation, and value are evaluated at n predefined probability levels. As in Kikuchi et al. (2013), we choose n = 6 and the probability levels

$$ p=\frac{1}{6}\left(1,2,3,4,5,5.97\right) $$
(1)

This yields, for each perceptual dimension, an n-dimensional vector with the corresponding colour values (H, S, or V) at the specified probability levels p_n. For hue, the vector is:

$$ {H}_{\mathrm{i}}=\left\{{H}_{\mathrm{i}}(n)\right\} $$
(2)

with

$$ {H}_{\mathrm{i}}(n)=\left\{{cd}_{\mathrm{i}}(H)={p}_{\mathrm{n}}\right\} $$
(3)

where cd_i(H) is the normalized cumulative distribution of hues of a given colour sample i. Here, a colour sample is a set of colours from one type of object. Similarly, for saturation and value, these vectors are:

$$ {S}_{\mathrm{i}}=\left\{{S}_{\mathrm{i}}(n)\right\} $$
(4)
$$ {V}_{\mathrm{i}}=\left\{{V}_{\mathrm{i}}(n)\right\} $$
(5)

with

$$ {S}_{\mathrm{i}}(n)=\left\{{cd}_{\mathrm{i}}(S)={p}_{\mathrm{n}}\right\} $$
(6)
$$ {V}_{\mathrm{i}}(n)=\left\{{cd}_{\mathrm{i}}(V)={p}_{\mathrm{n}}\right\} $$
(7)

Figure 4 illustrates the normalized cumulative distributions (shown here for saturation) from two different samples with the levels p_n. The agreement of the two sample distributions i = 1 and i = 2 in terms of hue (H) is determined as follows:

$$ {A}_{\mathrm{H}}={\left(\prod_{n=1}^{6}\left(1-{d}_{\mathrm{H}}\left({H}_1(n),{H}_2(n)\right)\right)\right)}^{\frac{1}{6}} $$
(8)

where dH is a distance function that takes into consideration that H in the HSV colour model is a periodic variable:

$$ {d}_{\mathrm{H}}\left(x,y\right)=\min \left(\left|x-y\right|,2-\left|x-y\right|\right) $$
(9)
Fig. 4
figure 4

The normalized cumulative distributions of saturation values from two different samples. Agreement of the two samples is evaluated in terms of the values S in both distributions at equal probabilities p_n based on some distance function

The agreement in terms of saturation is calculated in the same way as for hue:

$$ {A}_{\mathrm{S}}={\left(\prod_{n=1}^{6}\left(1-{d}_{\mathrm{S}}\left({S}_1(n),{S}_2(n)\right)\right)\right)}^{\frac{1}{6}} $$
(10)

however, with a simpler distance function:

$$ {\mathrm{d}}_{\mathrm{S}}\left(x,y\right)=\left|x-y\right| $$
(11)

For the agreement in terms of lightness, Kikuchi et al. (2013) suggest calculating agreement based on ratios instead of differences, referring to the Weber-Fechner law of human sensation of lightness stimuli. Hence, we calculate agreement with respect to value as:

$$ {A}_{\mathrm{V}}={\left({\prod}_{n=1}^6\frac{\min \left\{{V}_1(n),{V}_2(n)\right\}}{\max \left\{{V}_1(n),{V}_2(n)\right\}}\right)}^{\frac{1}{6}} $$
(12)

The colour similarity index is finally the geometric mean of the agreements in hue, saturation, and value:

$$ \mathrm{CSI}={\left({A}_{\mathrm{H}}{A}_{\mathrm{S}}{A}_{\mathrm{V}}\right)}^{\frac{1}{3}} $$
(13)

This statistical approach to determining colour similarity is agnostic to structural features in the images; it is therefore suited to comparing the colour similarity of arbitrary regions (in the same image) that may have different shapes and textural structures, as is the case in our visualization.
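A minimal sketch of the complete metric follows. Our analysis was implemented as MATLAB scripts (see the next section); this Python restatement of Eqs. (1)–(13) assumes that hue is scaled to [0, 2] so that the periodic distance of Eq. (9) is bounded by 1:

    import colorsys
    import numpy as np

    P = np.array([1, 2, 3, 4, 5, 5.97]) / 6.0  # probability levels, Eq. (1)

    def hsv_quantiles(rgb_pixels):
        """Inverse cumulative distributions of H, S, and V evaluated at the levels P,
        for an (N, 3) array of RGB pixels in [0, 1] sampled from one object type."""
        hsv = np.array([colorsys.rgb_to_hsv(*px) for px in rgb_pixels])
        return (np.quantile(2.0 * hsv[:, 0], P),  # hue scaled to [0, 2], Eqs. (2), (3)
                np.quantile(hsv[:, 1], P),        # saturation, Eqs. (4), (6)
                np.quantile(hsv[:, 2], P))        # value, Eqs. (5), (7)

    def csi(sample1, sample2, eps=1e-9):
        """Colour similarity index of two pixel samples, Eqs. (8)-(13)."""
        h1, s1, v1 = hsv_quantiles(sample1)
        h2, s2, v2 = hsv_quantiles(sample2)
        d_h = np.minimum(np.abs(h1 - h2), 2.0 - np.abs(h1 - h2))  # Eq. (9)
        a_h = np.prod(1.0 - d_h) ** (1.0 / 6.0)                   # Eq. (8)
        a_s = np.prod(1.0 - np.abs(s1 - s2)) ** (1.0 / 6.0)       # Eqs. (10), (11)
        ratio = np.minimum(v1, v2) / np.maximum(np.maximum(v1, v2), eps)
        a_v = np.prod(ratio) ** (1.0 / 6.0)                       # Eq. (12)
        return (a_h * a_s * a_v) ** (1.0 / 3.0)                   # Eq. (13)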

Results from the colour similarity analysis

We implemented the calculation of the colour similarity index (CSI) as MATLAB scripts to study the influence of transparency in our visualizations. Figure 5 shows the results for the similarity metric, CSI, in an analysis of our visualizations rendered with different levels of transparency from two different viewpoints. In Fig. 5a and c, the colours of the four object types rendered at different transparency levels are compared with the colours of the same object type rendered with no transparency. The colour appearance of one type of object becomes increasingly dissimilar (lower CSI values) under transparent conditions when compared with the appearance of the same type of object rendered with no transparency. This trend is apparent for all objects but to different degrees; the effect therefore seems to depend on the chosen base colour of the object. The pairwise comparison of the colour tones of different object types (Fig. 5b and d) shows that the colour similarity of different objects tends to increase with transparency for most pairs. Also evident is that the colour similarity index in a comparison of 2D and 3D property units is constantly high across all transparency levels. This is expected, since the same hue (green), differing only in value and saturation, had been chosen for these two types of objects. These findings (in terms of CSI) also agree well with the perception of some users, who stated that 2D property objects were difficult to distinguish. In summary, the CSI-based analysis reveals that colours of the same object type become increasingly dissimilar, and colours of different objects become increasingly similar, with increasing levels of transparency.

Fig. 5
figure 5

Similarity of colours comparing a the same objects and b different objects rendered at different levels of transparency for scene rendered from the reference viewpoint. Corresponding results for the alternative viewpoint are shown in c and d

Analyzing risk of misinterpretation

The colour similarity metric, as presented in the previous section, is purely based on colour information; structural features present in the compared visual entities therefore do not affect it. The drawback of the CSI is that it does not reveal where, within a visualization, the colour similarity of different objects is greatest and, hence, where the risk of misinterpretation is largest.

To study the latter, we suggest the concept of an unbiased machine-based observer that automatically classifies image pixels based on colour, and we evaluate its classification errors. We propose a machine learning approach based, again, on the image statistics gathered in the previous section. We collect samples (pixels) from objects of different types to train a k-nearest-neighbour (kNN) classifier, with the object IDs as class labels and hue, saturation, and value as features. The widely used kNN method is a model-free classifier that does not generalize well; instead, it performs very well on the training data, even for irregularly shaped and nested clusters. For our purpose, generalization is not relevant, as we are only interested in the particular visualization being analyzed. Hence, we can expect good classification performance, since we look only at the data to be analyzed in each instance.

We tested different parameterizations of the kNN classifier and found optimal classification performance for k = 3 (on average over all sizes of the training datasets). We initially varied the size of the training data from 500 to 10,000 random samples from each of the four object classes. We found that classification performance naturally improves as the size of the training dataset increases, but the rate of improvement grew smaller beyond 2000 samples. For our analysis, we therefore performed kNN classification with k = 3 and t = 2000 training samples from each of the four classes, with 1000 test samples per class. The classification error per class (object type) was determined as the percentage of misclassified test samples from the respective class. Error values were averaged over ten independent classification experiments (i.e. independent training and testing sessions). We then repeated this analysis for varying levels of transparency.
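The experiment can be sketched as follows, using scikit-learn’s kNN implementation; the per-type pixel samples are assumed to come from the ID-buffer masking described earlier, and the data structure is illustrative:

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def knn_error_rates(samples, k=3, n_train=2000, n_test=1000, n_runs=10, seed=0):
        """Per-class misclassification rates of a colour-based machine observer.

        samples: dict mapping object-type id -> (N, 3) array of HSV pixel features.
        Returns the error rate per class, averaged over n_runs independent
        training and testing sessions."""
        rng = np.random.default_rng(seed)
        classes = sorted(samples)
        errors = {c: [] for c in classes}
        for _ in range(n_runs):
            X_tr, y_tr, X_te, y_te = [], [], [], []
            for c in classes:
                idx = rng.permutation(len(samples[c]))
                X_tr.append(samples[c][idx[:n_train]])
                X_te.append(samples[c][idx[n_train:n_train + n_test]])
                y_tr += [c] * n_train
                y_te += [c] * n_test
            knn = KNeighborsClassifier(n_neighbors=k)
            knn.fit(np.vstack(X_tr), y_tr)
            pred = knn.predict(np.vstack(X_te))
            y_te = np.asarray(y_te)
            for c in classes:
                errors[c].append(float(np.mean(pred[y_te == c] != c)))
        return {c: float(np.mean(e)) for c, e in errors.items()}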

Results from the spatial analysis

Figure 6 shows the overall result of the automatic classification of objects based on colours in visualizations with increasing transparency levels. The graphs show, similarly for the scene rendered from the two viewpoints, that the risk of object misclassification increases rapidly as transparency rises above 50%. This pattern is almost identical for all object types. Figure 7 presents the spatial distribution of misclassified samples (red) from one run (k = 3, t = 2000); for better contrast, the analyzed visualizations are shown in grey. The results of this analysis suggest that errors are not sporadic; instead, they occur in areas with multiple overlapping objects and near object borders. In addition, the areas at risk of misclassification grow with increasing transparency.

Fig. 6
figure 6

Errors in kNN-based classification of objects based on colour in visualizations with increasing transparency. Results for the reference view are shown in (a) and for the alternative viewpoint in (b)

Fig. 7
figure 7

Spatial distribution of classification errors for the scene rendered from the reference viewpoint with 0% (a), 50% (b), and 90% (c) transparency. The corresponding results for the alternative viewpoint are shown in (d)–(f)

Visual saliency analysis and transparency

The analysis in the previous sections reveals a dilemma of 3D visualizations: Whilst details in a scene are likely to be entirely occluded in renderings without transparency, increased transparency can provide a holistic view of all objects in a scene, as acknowledged by participants in our tests. However, with increasing transparency, details blend into their surroundings and become less visually salient.

The importance of visual saliency in graphical representations and its influence on human decision-making has been discussed in several studies (Glaze et al. 1992; Sun et al. 2010; Milosavljevic et al. 2012). The concept of saliency maps has been described by several authors to model how visual attention is driven by low-level visual features (Koch and Ullman 1985; Itti et al. 1998), although many researchers agree that attention is not only stimulus-driven (Itti and Koch 2001; Parkhurst et al. 2002; Underwood et al. 2006). The concept of visual saliency maps has recently been further improved and calibrated with eye-tracking studies in the field of data visualization (Matzen et al. 2017) and has been suggested for the assessment of visualization designs (Wall et al. 2019). In our study, we use the reference implementation of the Data Visualization Saliency (DVS) model by Wilson (2017), described in Matzen et al. (2017), to produce saliency maps of our visualizations under different conditions.

Results from the saliency analysis

Figure 8 demonstrates the result of the DVS model for the same scene visualized with different levels of transparency. The output of this model is a saliency map with values in the interval [0, 1.0], visualized here in colours from blue to red. The circled areas in Fig. 8 identify the location of a 3D facility unit that is already difficult to distinguish with no transparency and becomes less and less visible as transparency increases. In our visualization, as in many 3D tools, we render transparent objects with modestly emphasized edges in order to enable the user to distinguish their outlines even at very high transparency levels. The effect of this is indeed evident across the entire image in the saliency map at 90% transparency (Fig. 8f). The DVS model is, as its authors state, sensitive to low-level visual features that include not only colour stimuli but also local structural information, such as that present in line drawings and text (Matzen et al. 2017). In our example, this results in comparably high levels of saliency across a large area of the most transparent visualization.

Fig. 8
figure 8

A visualization rendered with different levels of transparency a 0%; b 50%, and c 90%. df The saliency map produced by the DVS model overlaid on the visualizations

This corroborates that different rendering styles (e.g. in terms of texture, edge enhancement, or local transparency/colour adjustment) are effective in increasing visual saliency to some degree. It therefore seems reasonable to selectively manipulate rendering styles in order to prevent objects of interest from losing visual saliency when it is important to attend to them. To test this hypothesis, we suggest differential saliency maps to investigate the consequences of a deviating graphical rendering style for some object in a visualization. Here, we demonstrate this for the 3D property unit identified in Fig. 8.

To enhance visual saliency in high-transparency rendering, we suggest two interventions: (1) a strongly emphasized edge representation, obtained by increasing the thickness of the edges and choosing an intensity corresponding to the object’s level of transparency, and (2) setting the transparency of selected objects to 0% in addition to the measures in (1). Figure 9a and d show the visualization and its saliency map at 90% transparency, and Fig. 9b and c show the same rendering with interventions (1) and (2) applied to the 3D property unit in question. The differential saliency maps confirm what is visible to the naked eye: Fig. 9e shows the difference in saliency between Fig. 9b and Fig. 9a, whereas Fig. 9f shows the difference in saliency between Fig. 9c and Fig. 9a. The differential saliency maps use a diverging colour map to show changes in saliency in the range [−1.0, 1.0]; for increased clarity, the original visualization is shown as a grayscale image. Both interventions led to an increase in visual saliency for the object of interest, at the cost of slightly lowered saliency across other areas of the visualization.
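Given two saliency maps produced by the DVS model, the differential saliency map is simply their pixel-wise difference. A minimal sketch, assuming maps normalized to [0, 1]:

    import numpy as np

    def differential_saliency(altered, baseline):
        """Pixel-wise change in saliency caused by a rendering intervention
        (cf. Fig. 9e, f). Inputs are 2D saliency maps in [0, 1] for the altered
        and the unaltered visualization; the result lies in [-1, 1], where
        positive values mark regions that gained saliency."""
        return np.asarray(altered, dtype=float) - np.asarray(baseline, dtype=float)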

Fig. 9
figure 9

The visualization rendered at 90% transparency with different rendering styles for one object of interest. a Standard rendering style with slight edge emphasis, b strongly emphasized edges based on increased line thickness and contrasting colour, and c strongly emphasized edges with no transparency on the selected object. d The saliency map of the unaltered visualization in a. e and f show differential saliency maps

Discussion and conclusions

As described in Andrée et al. (2018b) and Larsson et al. (2018), the digitization of existing analogue 3D property boundaries and related records is an extensive task, as the documentation (the legal dossier) does not clearly describe the geometry of the property and rights boundaries. We have seen that it is not an easy task to visualize 3D properties and RRRs in different ways. The challenges relate to several aspects, such as how the objects are viewed, the legal issues, status, and what colours are used to visualize different types of objects.

Participants in the interview study were predominantly positive regarding the suggested 3D visualizations and how they could facilitate users’ work for different types of needs and groups in the built environment industry. However, in order for such a 3D visualization model to be fully developed and implemented in practice, users must emphasize the need for and demand such development in their respective organizations.

It should be investigated which items are suitable to be presented together and/or activated/deactivated in the handling of various types of cases involving 3D property information, and how different statuses of different types of properties, rights, and boundaries can be visualized. Furthermore, research is needed on the implementation and visualization of information from different authorities, as well as on which property information should be incorporated that is not, or only rudimentarily, visualized in the cadastral index map today. Also, more detailed and customized queries than those available in our demonstrator should be realized.

The skills in digital 3D information and visualization needed in future case handling of 3D real properties should also be studied. The systems will require new skills, including competence in architecture and the BIM, within the built environment industry in order to manage procedures in a more 3D model-based system for the real property register and property formation.

One conclusion from the results of the interviews is that a visualization needs to be customizable, as also suggested by Shojaei et al. (2013). It should allow reconfiguring the colours to meet users’ personal needs and enable optimization for contrast and interaction with other transparent objects. Ideally, colour palettes for 3D visualization would be included in a future product as part of the application’s configuration database, with default settings and individual settings for each user.

Considering the assessment of the effects of various rendering attributes in our visualization, we suggested three computational approaches that enable developers as well as users to make informed choices regarding the use of colour, transparency, or edge enhancement. The calculation of the colour similarity index (CSI) is, apart from a few differences regarding our choice of the perceptual HSV colour model, similar to the algorithms referred to earlier (Kikuchi et al. 2013). What is novel is the use of such a quantitative colour similarity metric to systematically study the effects of graphical styles in 3D visualizations. The colour similarity index allowed us to measure and compare the combined effects of colour blending resulting from transparent object rendering and from shading and illumination.

The results showed that objects with quite distinct colours when rendered without transparency tend to become increasingly similar as transparency is increased. This similarity is determined not only by hue; instead, colour distances are determined in terms of differences in all perceptual properties of colour stimuli according to Eqs. (9), (11), and (12) in “Effects of colour blending”. Therefore, we assert that this method is representative of how humans see the visualizations. In fact, one might argue that the analysis of colour similarity only confirms what is obvious from visual inspection. We agree that it does, and for this very reason we believe that the method is a valid and objective means of assessing the effects of different rendering styles in a visualization. This is of great benefit, because this kind of colour similarity analysis can be incorporated into a (semi-)automated process to optimize colour-coding schemes so that they remain distinguishable under different rendering settings (e.g. transparency, illumination).

To estimate the inherent risk of confusing objects due to their similarity in colour, we suggested an unbiased machine learning-based classifier built on the kNN algorithm. There are strengths and weaknesses to this approach. From a technical point of view, the kNN classifier is very easily adopted, and because it is trained for every visualization (or pair of visualizations) to be analyzed, it can be made to fit the training dataset very well, even if samples are very irregularly scattered in feature (i.e. colour) space. A computational classifier is capable of detecting colour differences (per pixel) that are far smaller than a human observer could possibly notice, as long as the statistics of the training data capture these minute differences. Hence, the classification errors obtained with the kNN classifier based on pixel colour most likely underestimate the errors of a human observer. On the other hand, human visual object classification compensates for this by not only involving bottom-up visual processes (colour, gradients, etc.) but also utilizing top-down processes involving a contextual analysis and understanding of the visualization. In the end, absolute levels of classification error are not really crucial; reproducibility is (within acceptable confidence intervals), when it comes to systematically analyzing trends and patterns in the effects of different rendering styles on object misinterpretation.

Comparing the results from the colour similarity analysis with the classification experiments, we draw two interesting conclusions: approximately 60% transparency seems to be a critical threshold, above which misclassification accelerates. Similarly, at a transparency level of approximately 60%, colour similarity was about 0.8, a common lower threshold of colour similarity in both comparisons within and between objects (see Fig. 5). These findings are not incidental to a particular camera view; in our study, they followed the same pattern regardless of viewpoint or scale, as illustrated with the two different viewpoints in this paper. Users in our interview study deemed the possibility to choose a level of transparency useful, and in this context, such a critical transparency level may serve as a guiding limit to prevent users from running into the risk of misinterpretation. We further conclude from the results of our colour similarity analysis and classification experiment that, in contrast to Pouliot et al. (2014), transparency should not be combined with visual variables like hue and saturation, as alpha blending in transparent rendering affects all three perceptual dimensions of the colours of objects.

Whereas the analysis of colour similarity mainly focused on the adverse effects of using transparency as a visual attribute, the differential visual saliency maps suggested in “Visual saliency analysis and transparency” can be applied to the study of other graphical effects or attributes in visualizations more generally. Again, the results from the saliency analysis showed that high-transparency rendering of objects can cause a substantial loss of attention to those objects, as the results in Fig. 9a and d demonstrate. This echoes the previous finding that colours of different objects (possibly near one another) become increasingly similar as transparency increases. The differential saliency maps also revealed that such undesirable effects can be mitigated by augmenting objects with other graphical styles. In our particular study, we investigated edge enhancements with different edge widths, as well as selective transparency adjustments. Both showed an effect in terms of increased visual saliency.

The metrics used in our analysis of colour similarity and visual saliency are unitless, normalized measures. Therefore, it is difficult to draw far-reaching conclusions regarding effect sizes and their significance. Nevertheless, the computational approach suggested in “Assessment of the impact of rendering attributes” allows for a comparative assessment of different visualization alternatives and reveals the worsening or improving effects of specific interventions in graphical design. This paper therefore does not claim to identify the best choices of colours, line styles, or other graphical styles. Such design choices are ultimately governed by several aspects beyond the control of the work presented here, such as regional standards, coherence with other existing professional tools, or individual preferences. Instead, given a number of different alternatives for the graphical representation of 3D cadastre data (or any other 3D data), the analytical approach suggested in this paper provides support in the design phase of a 3D visualization application for finding a relatively optimal solution within the bounds of existing requirements and preferences.

The systematic analysis of colour similarity and visual saliency emerged from some of the opinions on graphical styles stated in the initial interviews, but, within the scope of this study, no follow-up experimental study was conducted to compare the quantitative results with users’ perceptions of different graphical styles and attributes. For the saliency analysis, such validation already partly exists based on gaze-tracking data (Matzen et al. 2017), and for the colour similarity analysis, the trends presented in Fig. 5 agree well with our visual experience of the corresponding renderings (Fig. 2, top). Yet, further work, in the form of psychophysical experiments or controlled user experiments, is needed to establish and quantify the relationship between the metrics and observations presented here and users’ perceptions of visualizations using different graphical styles, in order to assess the strength of the effects resulting from the alteration of various graphical styles in a visualization.

To summarize the essence of the work presented here, we conclude that the visual attributes used for the visualization of 3D RRRs matter, both to the users in our interviews and in terms of our quantitative analysis. Users acknowledged the potential of transparent rendering for a better overview and asserted the need to be able to change colours. Nevertheless, our metrics and methods for analyzing visualizations also revealed the risks of confusion and of reduced visual saliency of objects in a scene that come with the variation of various rendering attributes. The bottom-line message is that whilst there are opportunities and risks when users or designers manipulate graphical attributes, we also have the means to systematically identify those risks for better guidance in the design of visualizations.