In 2010, Durham, NC passed a $37.5m bond referendum to repair the city’s aging streets. The city’s Public Works department was tasked with managing this major project, and to do so, it relied on three information systems: ESRI ArcGIS to map out which streets would be repaved, Primavera P6 to plan and schedule the paving, and Primavera Contract Manager to track the progress and payment of the many contractors laying asphalt. As the project got underway, a problem soon emerged: each of the three systems held only its own slice of project information, and there was no central place to get a quick, global view of project status. Instead, City staff had to consult Contract Manager to see what work had been done, P6 to know what was supposed to have been done, and ArcGIS to see where it was done. An earlier solution, developed in house by the City, attempted to integrate project information into a layer in ArcGIS. While that effort demonstrated the usefulness of such integration, updating the layer with new information was so labor intensive that the layer often lagged weeks behind the current state of affairs.
Concrete Data saw the City’s dilemma as a chance to leverage our strengths. We have extensive development experience with all three of their platforms and had previously handled integration projects for the City, so we knew the particulars of their installation. On the design side, the City wanted the application to be used by both staff and the public, so we knew that the final product had to be clear and intuitive.
In conversation with the City, we learned more about how they were managing the overall paving project and what information would be most useful to display.
From these requirements, we developed a few mockups of how we could combine all this information into an effective display. Given the geographic spread of the paving project, a map was clearly the primary way to organize the project information, but how to represent the other pieces was initially a puzzle. After some experimentation, we settled on marking each block with a colored map pin, where the color corresponded to the contractor-reported status of the block’s paving: blue for not yet started, yellow for in progress, and green for complete. Clicking on a marker would display a popup with the detailed schedule and cost information for the chosen block.
A pin shows the location of each block to be repaved, with its color representing the status of the block's repaving: blue means not started, yellow in progress, and green complete.
Clicking on a pin brings up detailed information about schedule and costs for that block.
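The pin-coloring rule described above can be sketched in a few lines. The status values and the `Block` shape below are illustrative assumptions for the sketch, not the City’s actual data schema:

```typescript
// Hypothetical shape of a block record; field names are assumptions.
type PavingStatus = "not_started" | "in_progress" | "complete";

interface Block {
  gisId: string; // the City's cross-system GIS ID
  street: string;
  status: PavingStatus; // contractor-reported paving status
}

// Map each contractor-reported status to its pin color,
// as in the legend: blue, yellow, green.
const PIN_COLORS: Record<PavingStatus, string> = {
  not_started: "blue",
  in_progress: "yellow",
  complete: "green",
};

function pinColor(block: Block): string {
  return PIN_COLORS[block.status];
}
```

Keeping the status-to-color mapping in one lookup table means the legend, the pins, and any future statuses stay in sync from a single definition.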
Further experimentation with the mockups revealed another concern: a few markers on a map are useful, but 5,000 can turn into a confusing mass. To alleviate this issue, we supplemented the map with two features: a filter box that would restrict the displayed blocks to only those streets matching the entered text, and a clickable text list of all displayed blocks placed to the left of the map. That way, a user looking for a particular block could enter the street name in the filter box and then click the block of interest in the text list.
Typing in a road name filters down the list of blocks in real time, and clicking a particular block displays its details.
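The filter box behavior amounts to a case-insensitive substring match over the block list. A minimal sketch, with an assumed `BlockEntry` shape rather than the application’s real one:

```typescript
// Hypothetical list entry; the real application carries more fields.
interface BlockEntry {
  street: string;
  blockNumber: number;
}

// Restrict the displayed blocks to streets matching the entered text.
function filterBlocks(blocks: BlockEntry[], query: string): BlockEntry[] {
  const q = query.trim().toLowerCase();
  if (q === "") return blocks; // an empty filter shows everything
  return blocks.filter(b => b.street.toLowerCase().includes(q));
}
```

Because the match runs over an in-memory list rather than a server round trip, the list can narrow on every keystroke, which is what makes the real-time feel possible.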
Before we could build the UI, we had to build the data system behind it. The first hurdle was deciding where and how to combine the data from the three systems. Fortunately for us, the City had standardized on a set of GIS ID numbers across all their systems, so we had a pre-existing key with which to cross-reference them. From past experience, we expected that combining data on the fly would lead to a fragile system, as its robustness and speed would be limited by the weakest link. Instead, we decided to aggregate the information into a single repository containing everything our web application needed. We considered building our own database to serve as this repository, but some investigation revealed that P6 could already store all the kinds of information we needed. Importing the information into P6 had the side benefit of making P6 a more useful project management tool, as project planners now had actual schedules and costs available to them.
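The cross-referencing the GIS IDs make possible is essentially a three-way join. The record shapes below are illustrative assumptions, not the real P6, Contract Manager, or ArcGIS schemas:

```typescript
// Hypothetical per-system records, each carrying the shared GIS ID.
interface ScheduleRec { gisId: string; plannedFinish: string } // from P6
interface CostRec     { gisId: string; paidToDate: number }    // from Contract Manager
interface GeoRec      { gisId: string; lat: number; lon: number } // from ArcGIS

interface Combined extends ScheduleRec, CostRec, GeoRec {}

// Join the three sources on the City's standardized GIS ID.
function joinByGisId(sched: ScheduleRec[], costs: CostRec[], geo: GeoRec[]): Combined[] {
  const costById = new Map(costs.map(c => [c.gisId, c]));
  const geoById = new Map(geo.map(g => [g.gisId, g]));
  const out: Combined[] = [];
  for (const s of sched) {
    const c = costById.get(s.gisId);
    const g = geoById.get(s.gisId);
    if (c && g) out.push({ ...s, ...c, ...g }); // keep only fully cross-referenced blocks
  }
  return out;
}
```

Without a shared key like this, each record pair would need fuzzy matching on street names, which is exactly the kind of weak link the on-the-fly approach would have suffered from.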
Step one, therefore, was to build a data pipeline from Contract Manager and the City’s contractor reporting tool into P6. Fortunately, Primavera P6 has an excellent interface for automated integration, the Web Services API, with which we have extensive experience. To pull data from Contract Manager, we had to rely on a more difficult and error-prone API, one that Primavera calls the XMLAPI. Again, our experience helped: we have done enough integration projects with Contract Manager to have developed our own adapter layer on top of the native interface, making it easier and more robust to use. The final system, the contractor reporting tool, was built on Microsoft SQL Server, so we queried the information directly from its tables. A custom utility queries these interfaces, translates the data into the required format, and pushes it into P6. Because it was important that the data be as fresh as possible, we built the utility to update only the records that had changed since its last run, allowing most runs to complete in seconds. As a result, the data in P6 was at most minutes old.
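The incremental-update idea can be sketched as a timestamp watermark: each run pushes only the records modified since the previous run finished. The field names and ISO-timestamp convention here are assumptions for illustration, not the utility’s actual implementation:

```typescript
// Hypothetical source record with a last-modified timestamp.
interface SourceRec {
  gisId: string;
  modified: string; // assumed ISO 8601 UTC timestamp, e.g. "2012-03-02T00:00:00Z"
}

// Return only the records changed since the previous run's watermark.
// ISO 8601 UTC timestamps sort correctly under plain string comparison.
function changedSince(records: SourceRec[], lastRun: string): SourceRec[] {
  return records.filter(r => r.modified > lastRun);
}
```

After a successful push, the utility would record the new watermark so the next run starts where this one ended; that bookkeeping is what lets most runs touch only a handful of records and finish in seconds.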
Step two was to build a web interface for our application to query. Even though we were pushing data into P6 using its Web Services API, we did not feel that API was the right technology to query from a web page. P6’s Web Services API is built on a set of core technologies – SOAP, WSDL, XML and authenticated sessions – which are ideal for industrial-strength integration applications but which carry quite a lot of overhead for a web application. Instead, we built an adapter layer to bridge the Web Services API and the web: the adapter component lives on a web server and keeps an authenticated session open to P6 via the Web Services API. When it receives a request from the web application, it queries P6 and translates the response into a lightweight JSON message for the web application to consume.
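The translation step inside the adapter can be sketched as follows: the verbose, wrapped structure that a parsed SOAP response typically produces is flattened into the lightweight JSON the web page consumes. The field names below are illustrative, not P6’s actual WSDL types:

```typescript
// Hypothetical parsed SOAP activity, with each field wrapped in an
// element object, as SOAP deserializers commonly produce.
interface SoapActivity {
  ObjectId: { value: number };
  Name: { value: string };
  PercentComplete: { value: number };
}

// Flatten the wrapped fields into plain properties and serialize once,
// producing the lightweight JSON message the web application consumes.
function toWebJson(activities: SoapActivity[]): string {
  return JSON.stringify(
    activities.map(a => ({
      id: a.ObjectId.value,
      name: a.Name.value,
      percentComplete: a.PercentComplete.value,
    }))
  );
}
```

The browser never sees SOAP envelopes or session handling; it makes a plain HTTP request and gets back a compact JSON array, which keeps the page payload small and the client code simple.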
As the final application came together, we were quite pleased: even with so many blocks to display, the page loaded in seconds, and filtering and navigation felt nearly instantaneous. A dataset that previously required digging through three different systems was now available for quick, real-time exploration.