  • 4K Camera Now Working

    Capt. Flatus O'Flaherty ☠ 07/22/2019 at 15:48

    It's only taken me 2 months to work out how to get the camera working without buying a 4K monitor, mostly thanks to a reply on the Nvidia community forum, which is pretty fantastic.

    #include "cudaResize.h"

    // Downscale the camera frame, in place, to the (smaller) texture dimensions
    CUDA(cudaResizeRGBA((float4*)imgRGBA, camera->GetWidth(), camera->GetHeight(),
              (float4*)imgRGBA, texture->GetWidth(), texture->GetHeight()));

    Place the above before the CUDA(cudaNormalizeRGBA()) call in the draw section at the bottom of the main loop. In the section near the top, where the code creates the display and texture, either set the texture size to a custom value or divide the camera size by a factor that brings the image within the bounds of your display. I divided the camera size by 2 for my needs.

    texture = glTexture::Create(camera->GetWidth()/2, camera->GetHeight()/2, GL_RGBA32F_ARB/*GL_RGBA8*/);
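
    For context, the tail of the main loop then looks roughly like this. This is a sketch based on the 2019-era detectnet-camera.cpp as I remember it (buffer names and the normalize ranges may differ in other versions); the point to notice is that everything after the resize should use the texture dimensions, not the camera's:

    // draw section at the bottom of the main loop (sketch)

    // shrink the 4K camera frame, in place, down to the texture size
    CUDA(cudaResizeRGBA((float4*)imgRGBA, camera->GetWidth(), camera->GetHeight(),
              (float4*)imgRGBA, texture->GetWidth(), texture->GetHeight()));

    // rescale pixel intensities from [0,255] to [0,1] for display --
    // note the texture dimensions here, not the camera's
    CUDA(cudaNormalizeRGBA((float4*)imgRGBA, make_float2(0.0f, 255.0f),
              (float4*)imgRGBA, make_float2(0.0f, 1.0f),
              texture->GetWidth(), texture->GetHeight()));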

    The camera frame now needs to be split into six grids for the new resolution, and calculations need to be made to account for perspective every time the camera moves to a new position.
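
    The splitting itself is straightforward. Here's a minimal sketch of the tiling arithmetic (the Tile struct, the makeGrid name and the 2 x 3 layout are just my assumptions for illustration, not part of jetson-inference):

    #include <vector>

    struct Tile { int x, y, w, h; };   // hypothetical helper type

    // Carve a frame into rows x cols equal tiles; 2 x 3 = 6 grids,
    // so a 3840 x 2160 frame gives six 1280 x 1080 tiles.
    static std::vector<Tile> makeGrid( int frameW, int frameH, int rows, int cols )
    {
        std::vector<Tile> tiles;
        const int tileW = frameW / cols;
        const int tileH = frameH / rows;

        for( int r=0; r < rows; r++ )
            for( int c=0; c < cols; c++ )
                tiles.push_back({ c * tileW, r * tileH, tileW, tileH });

        return tiles;
    }

    Each tile can then be processed at a size the display can handle, with the perspective correction applied according to the tile's position in the frame.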


  • A Question of Perspective

    Capt. Flatus O'Flaherty ☠ 06/18/2019 at 15:51

    I spent a bit of time taking about 1,000 photos of some yellow plastic discs I had lying around, standing in for grids of plants in the workshop rather than out in a field.

    This proved to be a great investment and has made testing the machine much easier.

    As can be seen from the above, my current monitor won't allow full display of the high-resolution camera images, but it nonetheless shows the issue with perspective quite nicely. Looking at the above, we might think the grid was nice and regular and giving good results, but, as below, I had to jiggle the back row of discs to compensate for the perspective:
    This shows that further calculations need to be made within the Jetson software stack to correct for perspective; otherwise, a plant missing at the extremities of the grid would completely throw off the navigation results. It's a nice challenge for the old brain cells!
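
    The standard tool for this sort of correction is a plane-to-plane homography: solve a 3 x 3 matrix H from four or more known reference points (the disc grid would do nicely), then map each detection's image coordinates onto the ground plane. A minimal sketch of the mapping step, with H left as a placeholder rather than a real calibration:

    // Map an image pixel (u, v) onto ground-plane coordinates (x, y)
    // using a 3x3 homography H, stored row-major. The matrix values
    // must come from calibration; none are hard-coded here.
    static void imageToGround( const float H[9], float u, float v,
                               float& x, float& y )
    {
        const float w = H[6]*u + H[7]*v + H[8];
        x = (H[0]*u + H[1]*v + H[2]) / w;
        y = (H[3]*u + H[4]*v + H[5]) / w;
    }

    Once the box centres live on the ground plane, the plant spacing is uniform again, so a missing plant at the edge of the frame shows up as a gap rather than skewing the whole row.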

  • Detection boxes coalescing solution

    Capt. Flatus O'Flaherty ☠ 06/09/2019 at 17:10


    The fix is a small change to the box-overlap test at line 446 of detectNet.cpp.

    BEFORE:

    inline static bool rectOverlap(const float6& r1, const float6& r2)
    {
        return ! ( r2.x > r1.z  
            || r2.z < r1.x
            || r2.y > r1.w
            || r2.w < r1.y
            );
    } 

    AFTER:

    inline static bool rectOverlap(const float6& r1, const float6& r2)
    {
        // The rectangles do not overlap at all
        if( r2.x > r1.z || r2.z < r1.x ||
            r2.y > r1.w || r2.w < r1.y )
            return false;

        const float overlap_x = std::min(r2.z, r1.z) - std::max(r2.x, r1.x);
        const float overlap_y = std::min(r2.w, r1.w) - std::max(r2.y, r1.y);
        const float area_overlap = overlap_x * overlap_y;

        const float area_r1 = (r1.z - r1.x) * (r1.w - r1.y);
        const float area_r2 = (r2.z - r2.x) * (r2.w - r2.y);

        // Treat the boxes as separate when the overlap is less than
        // 25% of both rectangles' areas, so detections on adjacent
        // plants are no longer merged into one
        if( area_overlap < 0.25f * area_r1 && area_overlap < 0.25f * area_r2 )
            return false;

        // The rectangles genuinely overlap
        return true;
    }

    If detectNet.cpp doesn't already pull in <algorithm>, it will need it for std::min and std::max.
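
    To see what the 25% threshold does in practice, here's a standalone check with made-up numbers (the float6 below is a cut-down stand-in for the real one in detectNet.cpp, keeping only the four corner fields used here):

    #include <algorithm>
    #include <cstdio>

    // Stand-in for detectNet's float6: (x, y) is the top-left corner
    // of a detection box and (z, w) the bottom-right corner.
    struct float6 { float x, y, z, w; };

    static bool rectOverlap( const float6& r1, const float6& r2 )
    {
        if( r2.x > r1.z || r2.z < r1.x || r2.y > r1.w || r2.w < r1.y )
            return false;

        const float overlap = (std::min(r2.z, r1.z) - std::max(r2.x, r1.x))
                            * (std::min(r2.w, r1.w) - std::max(r2.y, r1.y));
        const float area1 = (r1.z - r1.x) * (r1.w - r1.y);
        const float area2 = (r2.z - r2.x) * (r2.w - r2.y);

        // overlapping only if the shared area reaches 25% of either box
        return overlap >= 0.25f * area1 || overlap >= 0.25f * area2;
    }

    int main()
    {
        // Two 10x10 boxes offset by 8 px: shared area = 2 * 10 = 20,
        // i.e. 20% of each box, below the threshold, so boxes on two
        // adjacent plants stay separate.
        const float6 a { 0, 0, 10, 10 };
        const float6 b { 8, 0, 18, 10 };
        printf( "%s\n", rectOverlap(a, b) ? "merged" : "separate" );   // separate
        return 0;
    }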

  • Detection boxes coalescing

    Capt. Flatus O'Flaherty ☠ 06/09/2019 at 12:43

    The plants are much larger at this point in the season, and there's a problem with the detection boxes coalescing:

    I vaguely remember seeing something somewhere in some code that could possibly stop this happening... but where / how?

  • Now Works in Bright Sunshine!

    Capt. Flatus O'Flaherty ☠ 05/14/2019 at 09:07

    After adding about 1,000 'labels' as described in the previous log, rather surprisingly, the detection now works very well in bright sunlight with strong shadows:

    It can be seen that the blue detection boxes are tighter on the plants than before. Here's the training graph, from four hours on an AWS P2 xlarge instance:
    It does not look particularly impressive, but the training did seem to need 300 epochs to pick up the new features. The next step is to upgrade the camera to a Logitech Brio 4K, which has 4x the pixels of the C930e, and test on the WEEDINATOR robot.

  • Labels / NOT / Images !

    Capt. Flatus O'Flaherty ☠ 05/13/2019 at 11:50

    It's all about the number of labels, not the number of images. A proportion of the images should be close-up and high resolution, but quite possibly a large number can be lower resolution, so I decided to include photographs of the seedlings in groups of nine, as below:

    This reduces image-processing time, in that we get nine labelled objects from one cropped photo, and the labelling itself is quicker because less time is spent selecting images.
    The new, lower-resolution images can now be uploaded to DIGITS, and we can test deployment on the actual seed beds as before, checking for false positives etc.

  • First Real Time Deployment

    Capt. Flatus O'Flaherty ☠ 05/10/2019 at 14:40

    On a relatively small dataset of just 2064 images, we're already getting good results detecting swede plants. The boxes are not yet tight on the crops, which can probably be cured by adding a load of null images of bare soil. Shadows are also a problem, so additional images with shadows will probably be added to counter that.

  • First Training Results

    Capt. Flatus O'Flaherty ☠ 05/09/2019 at 18:35

    After uploading 2067 image sets to AWS, training results look very promising, with good convergence at 100 epochs. Total time to train = 2 hours. I took the trouble to photograph the swedelings at different ranges, so that some will have intricate detail whilst others will not.

  • First Seedlings Planted

    Capt. Flatus O'Flaherty ☠ 04/21/2019 at 18:05

    350 swedelings have been planted. The weather is dry and hot. Each plant is exactly 11" apart to match the weeding pattern of the robot. A giant wooden set square and carefully placed string lines are used for positioning.


    Next to plant are onions - something more challenging for the computer vision.

  • Eradicating Straw

    Capt. Flatus O'Flaherty ☠ 04/18/2019 at 12:31

    From experience using computer vision last year, some of the cameras got very confused by bits of dry vegetable matter lying on the surface of the soil, particularly long thin bits, or 'straw'. The previous log shows a very scrappy plot, mainly due to this straw being turned over near the surface rather than buried. A pass with the plough turns the soil over to a depth of about 8" and should help bury the rubbish. The test plot is now left to dry out, and any remaining weeds will get blasted by the strong sunlight we're getting at the moment.