Solidify existing code to prepare for planning implementation.

CONDUCT:

Decoupled previous code from visualisation/debug

Created PlanningParameters class to handle all of the planning input

Modified all existing classes (pathing, viewsheds etc) to take only a PlanningParameters object as the constructor

Set up PlanningParameters to create all data requirements (viewshed, terrain graph and cost map etc)

Created UnityPlanningScript as an interface between the Unity engine and the PlanningParameters object. It also acts as a stand-in planner based on publicly available methods (eg. calculating a path based on where a GameObject is located in the Unity engine – it transforms Unity coordinates to TerrainObject coordinates and calls the relevant methods)
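The coordinate transform in that stand-in planner can be sketched roughly as follows (Python for illustration – the project itself is C#/Unity, and the function name, origin and cell-size parameters are all hypothetical):

```python
def world_to_grid(world_x, world_z, origin_x, origin_z, cell_size):
    # Hypothetical sketch: map an engine world-space position onto terrain
    # grid indices, assuming the grid's world origin and a uniform sample
    # spacing are known to the script.
    return (round((world_x - origin_x) / cell_size),
            round((world_z - origin_z) / cell_size))

# eg. a marker at world (12, 7) on a grid starting at (2, 2) with 5m cells
# lands on grid cell (2, 1).
```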

Tested creation of the viewshed etc. on a separate thread. This doesn't result in faster creation but does prevent the program freezing while calculating.
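The pattern being tested – pushing the heavy calculation off the main thread so the engine stays responsive – looks roughly like this (a Python sketch of the idea only; the project would use .NET threading, and `build_viewshed` here is a trivial stand-in for the real calculation):

```python
import threading

def build_viewshed(heights, viewer_height, result):
    # Stand-in for the expensive viewshed calculation: just flag which
    # samples sit at or below the viewer height. The point is that the
    # work happens off the main thread.
    result["visible"] = [h <= viewer_height for h in heights]
    result["done"] = True

result = {"done": False}
worker = threading.Thread(target=build_viewshed,
                          args=([1.0, 3.0, 2.0], 2.0, result))
worker.start()
# The main loop would keep rendering here, polling result["done"]
# until the worker finishes.
worker.join()
```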

Created UI to test initialisation, path creation etc

Created a simple Plan class to hold the details of a finished Plan

Created a separate Visualisation class that uses information from the above data classes to visualise a Plan

Created a TestPlan method which takes the positions of in-editor placed markers for the FUP/SBF and creates a Plan object to be visualised

RESULTS:

The implementation of PlanningParameters works very well and effectively hides the underlying code from any planner object (thus removing a lot of complexity from the ‘front end’).

The visualisation is still quite simplistic but gives a basic idea and is quite easy to extend. An example is below:

TODO:

Modify the graph to store the function required for calculating a viewshed's impact on the cost field. This can be an anonymous function passed as part of the planning parameters (use of delegates etc. required)

TIME SPENT:

3 hours (60 hours running total)

Time spent on edited addition:

2 hours (62 hours running total)

EDIT:

Began development of presentation for initial CDF Seminar on Friday.

Implemented graph of terrain nodes with connected edges

Implemented cost field calculations

Implemented vector field for the complete graph given a target location (which is the friendly position in this instance; this means each point on the terrain can navigate back to the friendly position – or, in other words, a path can be created from any position on the terrain back to the friendly location)
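As a rough illustration of the cost field/vector field idea (a minimal Python sketch assuming a 4-connected grid where each cell has a cost to enter; the real implementation also folds viewshed modifiers into the costs):

```python
import heapq

def integration_field(cost, target):
    # Dijkstra outward from the target: total[r][c] is the cheapest
    # accumulated cost from that cell back to the target.
    rows, cols = len(cost), len(cost[0])
    total = [[float("inf")] * cols for _ in range(rows)]
    total[target[0]][target[1]] = 0.0
    frontier = [(0.0, target)]
    while frontier:
        d, (r, c) = heapq.heappop(frontier)
        if d > total[r][c]:
            continue
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < total[nr][nc]:
                    total[nr][nc] = nd
                    heapq.heappush(frontier, (nd, (nr, nc)))
    return total

def follow(total, start):
    # The implicit vector field: each step moves to the neighbour with the
    # lowest integrated cost, so any start cell walks back to the target.
    path = [start]
    r, c = start
    while total[r][c] > 0:
        r, c = min(((nr, nc)
                    for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1))
                    if 0 <= nr < len(total) and 0 <= nc < len(total[0])),
                   key=lambda p: total[p[0]][p[1]])
        path.append((r, c))
    return path

# Tiny demo: uniform costs, friendly position at (0, 0).
cost = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
total = integration_field(cost, (0, 0))
path = follow(total, (2, 2))   # walks back to the friendly position
```

Once the field is built, each additional path request is just a cheap greedy walk, which matches why the path requests above are so fast relative to the preprocessing.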

RESULTS:

Graph creation:

27×27 grid (729 data points) : 15ms

139×139 grid (~19k data points) : 407ms

278×278 grid (77k data points) : 1665ms

1390×1390 grid (1.9 million data points) : 49663ms

1000×1000 grid (5k by 5k grid with 5m sampling) : 24062ms

Graph and cost field/vector field creation with one path tracked:

27×27 grid (729 data points) : 34ms

139×139 grid (~19k data points) : 635ms

278×278 grid (77k data points) : 3270ms

1390×1390 grid (1.9 million data points) : 101448ms

1000×1000 grid (5k by 5k grid with 5m sampling) : 48959ms

Output demo visualised below. Note the avoidance of enemy line of sight and fastest path given terrain undulations.

The pathing is quite efficient too. A test with a 1000×1000 grid generated (5k by 5k grid with 5m sampling, total generation time of 42841ms) with 100 000 paths subsequently requested was conducted. The total time spent on the path requests was 12559ms. A test with 1 million path requests in the same scenario took a total of 113149ms.

It should be noted that while that is a good time for the number of path requests, given the preprocessing there isn't even a need for a path request until the final locations have been determined. All the information required for planning (which would otherwise require a pathing request with a different pathing solution) is present in the cost field.

Edit:

Continued work on testing. Fixed one bug with stack path being returned. Now runs well in real time. Example visualisations below.

TODO:

Implement data classes for core element variables, eg. the viewshed can take in viewer height etc. The Edge class calculates cost based on viewshed values (eg. currently more than 5m clearance puts a 0.1x modifier on cost, 1.5–5m puts a 1x modifier on and less than 1.5m puts a 10000x modifier on).
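The clearance thresholds described could be captured in a small function like this (Python for illustration – in C# this would be the delegate/anonymous function mentioned in the earlier TODO; whether exactly 1.5m counts as marginal or exposed is an assumption, as the boundary isn't specified):

```python
def viewshed_cost_modifier(clearance_m):
    # Multiplier applied to an edge's base cost, using the thresholds above.
    if clearance_m > 5.0:
        return 0.1       # comfortably out of enemy line of sight: cheap
    if clearance_m >= 1.5:
        return 1.0       # marginal clearance: normal cost
    return 10000.0       # exposed: effectively impassable
```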

Create a planning data class which is the overarching container for all the preprocessing classes already created. Implement the above data classes (eg. ViewshedConstructionInfo) into the planning data class. Use this class as the main interface for all future operations

TIME SPENT:

7 hours (54 hours running total)

After edit:

3 hours (57 hours running total)

DISCUSSION:

Moving forward: we are now finally at a point where the major preprocessing requirements have been implemented! This means it is (finally) time to start looking towards solving the actual problem. I will definitely revisit the work I have done so far in order to provide relevant helper methods that things like the fitness function may require, but the heavy lifting is now done!

The only other addition that falls under the ‘preprocessing’ step would be to identify and store FUP and SBF candidate locations however that will evolve as we go through the planning process.

Optimisation reflection: it appears as though the graph creation is much more of a bottleneck than the viewshed calculations. This was somewhat of a surprise, however it is pretty clear with a bit of consideration.

Donald Knuth famously wrote that ‘premature optimisation is the root of all evil’ – or alternatively: get it working, then get it working fast. This is a great example of why those are reasonable statements. My early thoughts on how I could optimise the viewshed calculation involved multithreading parts to get up to a 75% reduction (pure best case) in viewshed creation time. In this instance though, the viewshed is a fraction of the total preprocessing time. This means that what would have been a somewhat complicated endeavour (which would have added more potential bugs and a fair bit of development time) would have provided negligible benefit overall. The optimisation thoughts are still valid and definitely have merit for something that requires real-time viewshed calculations, but for now it is not something I will pursue. I will potentially look at optimising the graph creation process down the line though.

Continue development of 3D viewshed implementation.

CONDUCT:

A fair bit of bug hunting but less than before!

Previous partial implementation extended to full implementation for 2D height grid.

RESULTS:

Viewshed fully implemented for a 2D height grid (not strictly 3D, as it only considers ground height and not above-ground obstacles such as overpasses, but as 3D as we require)

10.5 million data points calculated in < 6 seconds.

Query from created viewshed includes checking if a point is visible (defaults to small tolerance at ground level) and if a point is visible if a target is a certain height above the ground. Due to the way the algorithm works, there is no difference in time between these queries and multiple queries of different heights can be made with no requirement for additional processing.
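Conceptually, the query works like this (a Python sketch with hypothetical names – the precomputed viewshed value per cell is the height a target must reach before it becomes visible, so checking different target heights is a single comparison against the same stored value):

```python
def is_visible(required_clearance, target_height=0.0, ground_tolerance=0.05):
    # required_clearance: precomputed viewshed value for the cell – how far
    # above the ground something must be before the viewer can see it
    # (0 means the ground itself is visible). The default tolerance value
    # is an assumption; the post only says "small tolerance".
    return required_clearance <= target_height + ground_tolerance
```

Because the stored value already encodes the full answer, querying "is a 2m-tall target visible here?" costs no more than the ground-level query.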

TIME SPENT:

2 hours (47 hours running total)

DISCUSSION:

Well that feels bloody great!

A major component of the preliminaries done; integrating with a flowfield pathfinding implementation should be comparatively easy now then on to the plan development!

I will put a more technical post up in future with the details of the implemented algorithm but for now…. A pretty solid sense of satisfaction.

Continue development of 3D viewshed implementation.

CONDUCT:

Development and implementation of viewshed algorithm for NE quadrant

A lot of bug hunting

RESULTS:

Fully functional 3D viewshed creation for North East quadrant from viewer location.

TIME SPENT:

4 hours (45 hours running total)

DISCUSSION:

The previous day’s work has paid off; while the algorithm implementation is still far from ideal, it is functional, fast enough and accurate enough for the requirements of this problem.

It is a good morale boost to get it working, and the extension to the remaining quadrants should be relatively straightforward as it is largely changing signs and reversing some calculations.

There is no doubt a more elegant way of doing this than hardcoding separate methods for each quadrant however it IS a way I can get it working well and once it is done I can move on to implementing the pathfinding, then (finally) getting in to the development of the actual planning!

This requires a bit more work from the caller (identifying the values for the rounded down and up coord) however it also means it can be called by the Bilinear Interpolation function as required. This requirement wasn’t needed in the final version however it is remaining this way in order for other elements to be able to call it from a 2D context (as opposed to only requiring a 1D array)

TIME SPENT:

3 hours (41 hours running total)

DISCUSSION:

A nice quick one tonight; the mental redevelopment of the viewshed implementation led me to the (rather obvious) realisation that I should approach it the way you are taught to approach programming problems from pretty much day 1… Break it down into discrete bits. As a result I have decided to create static helper functions that handle interpolation for me. The Bilinear function can be passed a non integer coordinate and the 2D array of values and will handle all the calculations. This means not only will the code that requires this be MUCH cleaner, it will save on code duplication (eg. if interpolation is required when checking FUP width at a set orientation etc).

Almost more importantly than the above, it means I can test this part of the logic in isolation. It would be very difficult to effectively test the viewshed as a whole (and identify the source of bugs if they were found!) however now I can be confident that the interpolation calculations are correct. This was easy to test in isolation using random data and confirming the data using my own calculations supported by the online calculator at http://www.ajdesigner.com/phpinterpolation/bilinear_interpolation_equation.php
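The bilinear helper can be sketched as follows (Python for illustration – the project's version is a C# static helper, and the clamping at the grid edge is my assumption):

```python
import math

def bilinear(grid, x, y):
    # grid indexed as grid[ix][iy]; x and y may be non-integer coordinates.
    x0, y0 = math.floor(x), math.floor(y)
    x1 = min(x0 + 1, len(grid) - 1)       # clamp at the grid edge
    y1 = min(y0 + 1, len(grid[0]) - 1)
    tx, ty = x - x0, y - y0
    # Interpolate along x on both bounding rows, then along y between them.
    low = grid[x0][y0] * (1 - tx) + grid[x1][y0] * tx
    high = grid[x0][y1] * (1 - tx) + grid[x1][y1] * tx
    return low * (1 - ty) + high * ty
```

Keeping this as a pure function of the coordinate and the 2D array is what makes it trivially testable against hand calculations and the online calculator.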

This is a good result as it isolates one of the more complex elements of the viewshed calculation. Now I can focus on the logic inside the viewshed algorithm without worrying about more intricate maths.

Bug hunting and fixing in an effort to get the 3D implementation working

RESULTS:

I minimised the map sampling positions so there was a grid of 4 x 4 samples. This led to a few code fixes and I returned to the larger sample size. I found a few other logic errors in the maths and made some fixes that came tantalisingly close to the required output however there were a few noticeable errors in an area of the map I used. Further digging revealed an underlying error with the logic (and indeed entire implementation).

In my haste to get the code working I inadvertently moved away from the idea of calculating in concentric squares outwards and instead used a more traditional nested for loop (iterating through x and y coords). I thought this would be reasonable as the previous coordinate had already been checked, however it ran into significant issues at the edge of the terrain; large swathes of terrain that should have been visible weren't. This will require a pretty sizable refactoring of the code.

TIME SPENT:

5 hours (38 hours running total)

DISCUSSION:

As the title of this post suggests, it almost felt like I was robbed… being so close, only to reach the conclusion that I have to step right back and reimplement the lion's share of the logic from scratch.

Thinking poetically for a moment though… many trees, including Paperbark and Eucalyptus, thrive after a bushfire; the fire that destroys the old growth sets the conditions for new growth to flourish. The code I have (through not enough planning and an eagerness to get it functional) has become quite messy. Between ad hoc debug statements, commented-out code and generally poor flow, gutting the calculations to start again is a blessing in disguise. Indeed, it won't even be from scratch; the linear code works well and I am confident a more streamlined approach to implementing the core diagonals will work well. That, combined with (much) more rigorous planning prior to coding the main quadrant parts (and paper-based testing to confirm the logic first!), should mean the final product is a lot slicker and causes a lot less headache in the implementation phase.

After a long day though, it's time to get the footy on the telly!

As outlined previously, the 3D implementation was suffering from a bug which caused an index out of bounds exception when attempting to implement the quadrants. The linear part was working fine, however it was not scaling in O(n) time as expected (MUCH worse in fact, which led to significant concern for the algorithm as a whole).

AIM:

Continue work on 3D viewshed implementation.

CONDUCT:

Reviewed index out of range bug for 3D viewshed

Conducted benchmarking of algorithm in current form

RESULTS:

The bug causing the array index exception was located and corrected (I believe, at least!). The algorithm itself is still not calculating the quadrant heights correctly (ie. the logic inside it is still failing), however it IS executing over the correct range now at least. The culprit was an error in calculating the intersection point for coordinates on an angle (not using the inverse of a value when I should have, resulting in very large numbers where there should have been ratios).

This was fixed through the use of UnityEngine debug log statements; the engine maintains a console where the debug statements print out. It is a very handy method for debugging, however a major drawback is that it is quite slow. Not an issue for single statements, but where a statement is inside a loop executing many times it causes significant slowdown. I realised I had rather foolishly left the debug statements in while benchmarking. Commenting them out led to much better results…

I was also able to run calculations over the entire grid (as opposed to just the linear X and Y lines of sight from the viewer location as previously), which is good as it is more representative of the algorithm's running time as a whole (correcting the maths logic errors should have negligible impact on the magnitude of the running time).

The results obtained were:

16.6k calculations (1m spacing on a 129×129 grid) – 25ms

66.5k calculations – 74ms

1.66 million calculations – 1646ms

Note the last test would correspond to a ~1.3km square terrain with heights sampled every metre, or over a 6km square with 5m sampling! This is great news as it means the current algorithm (when correctly implemented) will work well within the problem scope!

From here…. fix the error in the algorithm (downsize sample to be able to work by hand)

TIME SPENT:

2 hours (30 hours running total)

Updated time spent:

3 hours (33 hours running total)

Update:

More work continued on the implementation of the 3D algorithm. Progress is being made but there are still bugs to work through. Still…. even if the bugs always tinge it with disappointment… slow progress is encouraging.

The 3D implementation of the viewshed is something that needs to be completed prior to integration with pathing solutions which itself is a prerequisite for the development of the overall problem solution.

AIM:

Work on 3D viewshed implementation.

CONDUCT:

Developed 3D viewshed visualisation and related interaction

Attempted implementation of 3D viewshed algorithm

Algorithm complete but still has bugs

Fully functional along the X and Y axis of the viewer

Capabilities the same as the linear implementation

RESULTS:

Visualisation works well. The Unity profiler allows observation of CPU time spent in each activity so it is possible to determine timing for each element of the program – this means the viewshed calculation time can be determined accurately.

The 3D algorithm has a bug in the calculation of the intersection point on angles. This is something that should be able to be worked through fairly quickly when I next pick it up.

The algorithm works fine in a line along the X and Y axis of the viewer. The implementation here is only a minor extension of the linear version. The results were good.

The above image was a 129x129m terrain with heights sampled every 2m.

This results in a total of 4225 sample points. The visualised viewshed is 130 points and the processing time to calculate is ~5ms (only the linear calculation was done, as per the visualisation, so only ~130 calculations).

When the sampling was reduced to 1m samples (16.6k points), the processing time ballooned to ~170ms. This seems reasonable, however the code to calculate the points outside the linear X and Y axes from the viewing location was not run. This means only 258 calculations were run (this result is very concerning!). Obviously this isn't going to be constantly recalculated, however it may need some optimisation or further research. It does seem strange, as the algorithm appears to be O(n) but the timings above don't agree with that.

When the sampling was extended to 4m samples the CPU time was ~1ms or less.

Algorithms for viewshed calculations have been reviewed on and off over the past several weeks. Based on this background reading, I have been developing the basic concept of an algorithm that only refers to the neighbour points in calculating viewability from a location. I wanted to test out the concept before looking at extending it to 3D.

The general concept was taken from the below reference

Develop base algorithm for viewshed calculation in order to inform planning for 3D implementation

CONDUCT:

Test scene created in Unity using cubes in a side on view (2D)

Extracted data from scene (cube heights in y axis) to provide initial conditions for algorithm

Implementation of viewshed algorithm

Added observer height and observation tolerance (max viewshed height to be considered out of Line of Sight – eg. If a point is 1cm below view it can still be considered ‘in view’)

Implemented visualisation in Unity and randomised location demo

RESULTS:

The algorithm in a linear sense is basically as follows:

Given a set of heights based on an x position (eg. x = 0, y = 1.3, x = 1, y = 1.6 etc)

Create an array of heights with the index of each array element referring to the x position

Create an equal sized array of viewshed heights (corresponding to the height something must be at to be visible from the viewing location)

Given a viewer location (x position)

Set the viewshed height of that index and those either side of it to 0 (ie. you can always see the point you are on and the neighbour points)

Moving to the right (increasing x position value)

Calculate gradient at previous position = rise / run

RISE = (height of previous position + viewshed height at previous position) − height at viewer's location

RUN = Absolute value of (viewerLocation index – previous location index)

Given that gradient

Minimum height to be seen at current index = height at previous index + viewshed height at previous index + gradient

(The above is due to us moving ONE unit at a time on the X axis)

IF height at current index is greater than minimum height to be seen at the current index, set viewshed height at current index to 0 (We can be directly seen)

OTHERWISE set the viewshed height at current index to minimum height to be seen at current index – height at current index

This works well as can be seen in the gif below
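The linear algorithm above translates almost directly into code (a Python sketch; the project's implementation is C#):

```python
def linear_viewshed(heights, viewer):
    # viewshed[i] = extra height required at index i before it is visible
    # from the viewer (0 means the ground at i can be seen directly).
    # The viewer's own cell and its immediate neighbours are always 0.
    n = len(heights)
    viewshed = [0.0] * n
    for step in (1, -1):                  # sweep right, then left
        cells = range(viewer + 2, n) if step == 1 else range(viewer - 2, -1, -1)
        for i in cells:
            prev = i - step
            rise = heights[prev] + viewshed[prev] - heights[viewer]
            run = abs(viewer - prev)
            gradient = rise / run
            # Moving one unit at a time, so the sight line climbs by exactly
            # one gradient per step.
            min_visible = heights[prev] + viewshed[prev] + gradient
            viewshed[i] = max(0.0, min_visible - heights[i])
    return viewshed

# eg. a viewer at x=0 on flat ground with a 2m hill at x=2: the cells behind
# the hill need increasing height to be seen.
shadow = linear_viewshed([0, 0, 2, 0, 0], 0)
```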

In 3D the viewer point will be in an arbitrary position in an X Y plane. My initial thoughts are the exact same (linear) algorithm can be used in the cardinal directions (ie. ‘North/South’ being directly along the Y axis, ‘East/West’ being directly along the X axis from the viewer position only). A very slight modification can also be used for the NW/NE/SW/SE directions as well.

The algorithm will need to be amended for all other cells however; It will not be possible to directly check a previous location’s height and viewshed height. This is not a significant complication though; it is a relatively straightforward process to interpolate between the two points closest to where the ‘previous neighbour’ would be (along the Y axis for locations in the North and South quarters and along the X axis for locations in the East and West quarters). This will involve a weighted average between the two points with the weighting based on the relative distances on where the exact neighbour should be (eg. If the neighbour location would be 4.2 on the x axis, the values from the point at 4 would be weighted at 80% compared to the values at the point at 5 on the x axis with a weighting of 20%)
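The weighted average for a non-integer neighbour position can be written as (Python sketch; `lerp_sample` is a hypothetical name):

```python
def lerp_sample(values, x):
    # x = 4.2 -> 80% of values[4] + 20% of values[5], as described above.
    lo = int(x)
    t = x - lo
    return values[lo] * (1 - t) + values[lo + 1] * t
```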

One significant outcome of the above is that, once the cells in black above are calculated with the simple algorithm, each of the 8 subsections created can have its values calculated independently of the other sections. This opens up the potential for a multithreaded approach, which should be relatively straightforward (each segment is separated from the others and has no interaction with other elements) and will be an increase in efficiency.

TIME SPENT:

Original: 4 hours (24 hours running total)

From update below: 5 hours (29 hours running total)

Update:

Work continued on the above throughout the day. The basic linear viewshed class was refined and polished to a standard I am very happy with. The general algorithm for 3D was fleshed out and it was implemented in a Viewshed3D class. This is untested as yet as I will need to alter the visualisation scripts somewhat to effectively see what is happening ‘under the hood’. I ended up just breaking the space into quadrants (separated by the X and Y lines from the viewer location) and this provides a much easier implementation.

Work was also started on drafting the final report; I will start fleshing parts out now (especially the background/planning elements).

Moving forward the intent will be to add more focused (hopefully smaller) posts. Effectively each post should be constrained as much as possible to one specific area. This may mean that a long session of work/research may have to be split up into separate posts however I believe the additional effort will pay off in future. When I am trying to look up previous work in a certain category, it will ensure the majority of each post with that tag is relevant, as opposed to returning posts with a small bit on the category I am looking for buried amongst other elements of the problem. A ‘weekly summary’ or similar may still be worthwhile.

AIM:

Develop options for representing the solution data into the Evolutionary Algorithm (EA).

CONDUCT:

Ongoing discussion and ‘thought experiments’

RESULTS:

The general solution will consist of a Form Up Point (FUP) and a Support By Fire (SBF) location. The pathing to these locations will be handled by the pre calculated flow field so the solution encoding only requires the two locations. The problem then becomes how to represent these locations.

Options:

No pre processing of solution space:

Every single point in the terrain data can be considered for both FUP and SBF regardless of suitability (eg. Points in the open in front of the enemy position)

Each location represented by 2 integers corresponding to coordinates

eg. FUP (2, 17) and SBF (45, 32) represented as (2, 17, 45, 32)

Crossover has 4 options

Mutation can move one of four integers left/right

Somewhat limited

As above, but locations are represented by n-bit binary representations of each coordinate (n based on the size of the largest dimension)

eg. Using same FUP/SBF locations in 8 bit representation:

(00000010, 00010001, 00101101, 00100000) or

(00000010000100010010110100100000)

This solution offers significantly more crossover options ranging from the same as previous (keeping the 8 bit representations as one ‘whole’ bit to crossover) to smaller resolutions (eg. each integer has two four bit parts which can be used for crossover)

More importantly, it offers significantly better options for mutation: a random bit can be swapped between 1 and 0 which will result in a (potentially) large mutation due to the binary representation. This can be controlled; more significant bit mutation represents a larger solution mutation whereas a less significant bit will correspond to a smaller overall mutation.

Notable issue: this method can result in child solutions that lie outside the solution space, eg. if the locations are represented by 8 bits (unsigned) then the maximum value is 255. If the actual terrain is smaller than this (eg. the 100m x 100m test terrain), a mutated bit (or unlucky crossover) could result in a coordinate outside the terrain. This can obviously be checked after mutation, or the resulting child can ‘die off’ immediately, however it is still potentially wasted processing time.
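The binary encoding and bit-flip mutation can be sketched as follows (Python for illustration, with `BITS` fixed at 8 to match the example above; the mutate helper is a minimal version):

```python
import random

BITS = 8                       # enough for coordinates up to 255

def encode(fup, sbf):
    # (2, 17), (45, 32) -> '00000010000100010010110100100000'
    return "".join(format(v, f"0{BITS}b") for v in (*fup, *sbf))

def decode(genome):
    vals = [int(genome[i:i + BITS], 2) for i in range(0, len(genome), BITS)]
    return (vals[0], vals[1]), (vals[2], vals[3])

def mutate(genome, rng=random):
    # Flip one random bit: flipping a more significant bit produces a larger
    # jump in the decoded coordinate, a less significant bit a smaller one.
    i = rng.randrange(len(genome))
    flipped = "1" if genome[i] == "0" else "0"
    return genome[:i] + flipped + genome[i + 1:]
```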

Pre processing of solution space.

During viewshed calculation, FUP and SBF solution candidate lists are populated based on that point’s viewshed result. This will remove tactically unsound points from the candidate solution pool prior to the population of the EA.

Each location represented by a SINGLE integer which corresponds to the index of the location in the candidate list.

eg. FUP (2, 17) and SBF (45, 32) may be represented in the candidate lists as candidateFUP[3] and candidateSBF[9] thus they would be represented in the EA as (3, 9)

This results in minimal crossover options (one from each parent)

This also results in minimal mutation options (incrementing or decrementing one of two integers)

As above, but each integer is represented in binary using n bits (n based on the total size of the largest candidate array)

Using the above example, the encoding would be (in 4 bit encoding)

(0011, 1001) or (00111001)

This offers the same advantages as the previous binary representation without pre-processing, however it also suffers from the same issue: if the total size of the largest array is 70 then the encoding will be 7 bits. This can still result in indexes evolving that are outside the array length (especially notable if one array is much smaller than the other).
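The index-based encoding, with a bounds check for children that evolve outside the candidate lists, can be sketched like this (Python; the helper names are hypothetical):

```python
def encode_candidates(fup_index, sbf_index, bits):
    # eg. (3, 9) in 4-bit encoding -> '00111001', as in the example above.
    return format(fup_index, f"0{bits}b") + format(sbf_index, f"0{bits}b")

def decode_candidates(genome, bits):
    return int(genome[:bits], 2), int(genome[bits:], 2)

def in_bounds(fup_index, sbf_index, fup_pool_size, sbf_pool_size):
    # Guard run after crossover/mutation: a child whose index falls outside
    # either candidate list can be killed off or repaired.
    return fup_index < fup_pool_size and sbf_index < sbf_pool_size
```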

The above points are still under consideration however at this stage I am leaning towards the final one (pre processing and binary index representation).

The viewshed pre processing is required for pathing calculations regardless and it seems to be a significant waste of effort to not take advantage of that step to prepopulate solution candidate lists at the same time. This will be negligible computational cost (merely the additions to the lists) and save potentially significant time spent outside the tactically sound solution space (which is effectively outside the solution space anyway).

The additional processing required to confirm the mutation remains in the solution space and either kill it off or amend it if it doesn’t is likely to be much less than the additional computations spent outside the solution space without preprocessing.