
Elephant AI

A system to prevent human-elephant conflict by detecting elephants using machine vision, and warning humans and/or repelling elephants.

The conflict that arises between humans and elephants in countries such as India, Sri Lanka, and Kenya, claims many hundreds of human and elephant lives per year. These negative interactions arise when humans meet elephants on their trails, when elephants raid fields for food, and when elephants try to cross railways. Machine vision and automated deterrence can mitigate such conflict.

ELEPHANT AI SYSTEM


Here's a talk I did about the project


INTRODUCTION AND GOALS

This is an evolution of my 'Automated Elephant-detection system' that was a semi-finalist in the Hackaday Prize 2016. The current project differs substantially in that it makes use of more advanced machine vision techniques, eliminates the RF communication and village base stations in favour of 4G/3G/EDGE/GPRS on each elephant-detection device, and includes elephant-deterrence devices, so as to eliminate interaction between humans and elephants whenever possible.

* Thanks to  www.nerdycute.com for drawing our logo!

So, let's get to the primary goals of Elephant AI:

  • Eliminate contact between humans and elephants
  • Protect elephants from injury and death
  • Protect humans from injury and death

How will the Elephant AI accomplish these goals?

  • Detect elephants as they move along their regular paths. These paths have been used by elephants for many years (perhaps centuries) and often cut through areas now used by humans. Humans will be warned that elephants are moving on the paths so they can stay away or move with caution.
  • Detect elephants as they leave forested areas to raid human crop fields. At this point, elephant-deterrence devices will attempt to automatically scare the elephants away. This will be done using sounds of animals they dislike (e.g. bees and tigers, and human voices in the case of Maasai people in Kenya/Tanzania), and perhaps by firing chili balls into the paths of the elephants from compressed-air guns (see the sound-playing sketch after this list).
  • Detect elephants before they stray onto railway lines. This can be done via a combination of machine vision techniques and more low-tech IR (or laser) break-beam sensors. Train drivers can be alerted to slow-down and stop before hitting the elephants who are crossing.
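
For the audio side of the deterrence devices mentioned above, here's a minimal sketch of how a deter device could play a randomly-chosen deterrent sound. It assumes pygame is installed and that sound files such as sounds/bees.wav exist on the device; the filenames are purely illustrative.

    # Minimal sketch: play a randomly-chosen deterrent sound on the deter device.
    # Assumes pygame is installed; the sound filenames below are illustrative only.
    import random
    import time
    import pygame

    DETERRENT_SOUNDS = ["sounds/bees.wav", "sounds/tiger.wav", "sounds/voices.wav"]

    def play_deterrent():
        pygame.mixer.init()
        pygame.mixer.music.load(random.choice(DETERRENT_SOUNDS))
        pygame.mixer.music.play()
        # block until playback has finished
        while pygame.mixer.music.get_busy():
            time.sleep(0.1)
        pygame.mixer.quit()

    if __name__ == "__main__":
        play_deterrent()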

Just how bad is it for humans and elephants to interact? This video, shot several months ago in India, gives some idea. It is really bad indeed: it causes great stress to elephants, and puts both the elephants and humans at risk of injury or death.

That's why Elephant AI wants to take human-elephant interaction out of the equation entirely!

HARDWARE SETUP

We need a daylight camera (IR-filtered) and a night camera (NoIR, i.e. no IR filter, plus an IR illumination array), since elephants need to be detected 24 hours per day! In my original project I completely forgot about this, then decided to multiplex two cameras to one Raspberry Pi. It was actually cheaper and easier to use two Raspberry Pis, each with its own camera. Night-time and daytime classification of elephant images each need their own trained object detector anyway, so I don't think it's such a bad solution (for now).

METHODS FOR IMAGE CLASSIFICATION (older approaches)

This is the main part of the project. In my original automated elephant detection project I'd envisaged just comparing histograms!! Or, failing that, I'd try feature-matching with FLANN. Both of these proved to be completely rubbish at detecting elephants! I tried Haar cascades too, but these had lots of false positives and literally took several weeks to train!

Initially with ElephantAI I worked on an object detector using Histogram of Oriented Gradients (HOG) and Linear Support Vector Machines (SVM). That had promising results, giving only 26% false positives with a dataset consisting of 350 positive elephant images and 2000 negative non-elephant images (see https://hackaday.io/project/20448-elephant-ai/log/57399-4-result-for-object-detector-using-histogram-of-oriented-gradients-hog-and-linear-support-vector-machines-svm), and I expected improved results with larger datasets. And it did improve: I got a result of 16% false negatives with 330 positive elephant images and 3500 negative non-elephant images (see result #5).
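
For anyone wanting to reproduce the HOG + linear SVM approach, here's a minimal training sketch, assuming scikit-image and scikit-learn are installed and that the positive and negative images sit in elephant/ and not_elephant/ directories (the directory names and window size are illustrative, not my exact setup):

    # Minimal sketch: train a HOG + linear SVM elephant classifier.
    # Assumes scikit-image and scikit-learn; directory names are illustrative.
    import glob
    import numpy as np
    from skimage.io import imread
    from skimage.color import rgb2gray
    from skimage.transform import resize
    from skimage.feature import hog
    from sklearn.svm import LinearSVC

    def hog_features(path):
        image = resize(rgb2gray(imread(path)), (128, 128))  # fixed HOG window
        return hog(image, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2), block_norm='L2-Hys')

    X, y = [], []
    for path in glob.glob('elephant/*.jpg'):
        X.append(hog_features(path))
        y.append(1)      # positive class: elephant
    for path in glob.glob('not_elephant/*.jpg'):
        X.append(hog_features(path))
        y.append(0)      # negative class: no elephant

    clf = LinearSVC()
    clf.fit(np.array(X), np.array(y))
    # clf.predict() can now score HOG features from a sliding window or whole frame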

At present, I am working on differentiating between types of elephants using deep convolutional neural networks for image classification vs. classical machine-vision techniques I had...


  • 2 × Raspberry Pi 3 Model B [detection device: dayPi and nightPi] £32
  • 1 × Raspberry Pi Camera Module v2 (8MP), IR-filtered (standard) [detection device, daytime usage, dayPi] £29
  • 1 × Huawei E3531 2G/3G USB dongle £21
  • 1 × Case for elephant detection device. For the prototype we used: IP65 320 x 250 x 135mm Moulded Grey Sealed Adaptable Box, Weatherproof Enclosure with Lid (£25.00)
  • 1 × Case for elephant deter device. For the prototype we used: IP65 220mm x 170mm x 80mm Moulded Grey Sealed Adaptable Box, Weatherproof Enclosure Lid (£11.28)

View all 31 components

  • RF Network

    Neil K. Sheridan • 11/04/2017 at 21:56 • 0 comments

    So, if you recall from the Automated Elephant-detection system project, we had the concept of setting up RF comms between the elephant detection devices and field transmission stations. The elephant detection devices would use XBee 2mW Wire Antenna - Series 2 (ZigBee Mesh) modules to communicate with the field transmission station, and the field transmission station would use a long-range RF modem to communicate with the village base station.

    The setup is very basically outlined here:

    So now we develop that idea a bit more, and move on to testing it.

    We are setting up a network, arranged in a mesh topology (although it would be more of a tree topology if not for the deter devices).

    There are three device types in the network:

    1. End device
    2. Router
    3. Coordinator 

    Anyway, so far as I can see, we can go ahead with using the ZB SERIES S2C - 2MW WITH WIRE ANTENNA (range 120m line-of-sight / $23), setting these as either end device, router, or coordinator.

    Example starting point for network topology:

    So that's quite good, because we can have each elephant detector 120m from its router, and the shared coordinator can be 120m from each router. That gives us quite a wide area! And we don't need Bluetooth any more for the deter devices, so we are more flexible with their locations relative to the elephant detection devices!
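
    As a starting point for the ZigBee side, here's a minimal sketch of how a detection device (end device) might push an alert towards the coordinator. The digi-xbee Python library and the /dev/ttyUSB0 serial port are assumptions for illustration, not a tested part of this build:

    # Minimal sketch: end device sends an elephant alert over the ZigBee network.
    # Assumes the digi-xbee Python library (pip install digi-xbee) and an XBee S2C
    # module attached via a USB serial adapter at /dev/ttyUSB0 (both assumptions).
    from digi.xbee.devices import XBeeDevice

    def send_alert(message):
        device = XBeeDevice("/dev/ttyUSB0", 9600)
        device.open()
        try:
            # a broadcast reaches the router and the coordinator;
            # device.send_data(remote_device, message) would target a single node
            device.send_data_broadcast(message)
        finally:
            device.close()

    if __name__ == "__main__":
        send_alert("ELEPHANT DETECTED at detector 1")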

    Field Transmission Station:

    This is the coordinator for our ZigBee network. It is at the root of the tree, and starts the network. It also contains the security keys. Most importantly, as the name Field Transmission Station implies, it will bridge to another network. I was thinking of using the TX6000 LONG RANGE 20KM TRANSMITTER & RECEIVER to do this, at up to 500mW RF power output (£196).

    So we can transmit 10-20km line-of-sight from this. Hopefully that is far enough to reach somewhere with wired internet access, or somewhere with 2G/3G coverage! If not, we can use a repeater, giving up to 40km transmission range.

    Initial testing:

    1. First I'll set up a network of 1x elephant detection device, 1x deter device, 1x router, and 1x coordinator. Instead of using the costly long-range TX/RX device, I'll test it by adding 3G connectivity to the coordinator.
    2. Next, I'll try making each of the trees: that's 1x detection device, 1x deter device, and 1x router for the first tree, and the same for the second tree. And again the coordinator with 3G connectivity for testing.

    Protocol stack:

    It builds on top of  IEEE 802.15.4 with some additional layers, including for security. See wikipedia: https://en.wikipedia.org/wiki/Zigbee // I'll do a diagram when I actually get going!

    Conclusion:

    There are some alternatives to ZigBee, with different protocol stacks, which are perhaps easier to work with, but these ZigBee modules seem a lot cheaper!

    I envisage that these RF networks might be fairly useful for camera traps too! So researchers can set up a topology to get the images back to their base of operations easily, and without having to physically visit the traps. Hence, the camera traps remain free of the off-putting human smells!

    Resources:

    http://www.science.smith.edu/~jcardell/Courses/EGR328/Readings/Zigbee%20ACaseStudy.pdf

  • Building a daytime cat-detector: part 2

    Neil K. Sheridan • 10/29/2017 at 20:21 • 0 comments

    ( u n d e r    c o n s t r u c t i o n )

    So, the cat-detector was tested in better lighting conditions! Remember, we are building a cat-detector to test out image classification using TensorFlow on the Pi, and to investigate some of our future directions for elephant detection too. For instance, we can test out tweeting our labelled cat images and asking twitter users to add their own labels (i.e. the names of the cats), which is supervised learning. Much of this post will cover that.

    1. Let's see what happened with the cat-detector in better lighting conditions!

    Oh no! To be fair, the largest component of the image does look very similar to a prayer rug!

    Prayer rug again!


    Here the cat-detector camera is elevated about 10cm off the floor. At least it got an animal this time. And LaLa's ears do resemble the shape of a miniature pinscher's ears, to be fair!

    Here are some more results. Not very good really! The top guess was a Bernese mountain dog!

    2. What can we do to improve the accuracy of our cat-detector?

    Honestly, I don't think we can acquire much better images of the cats than these, taken at their feeding station! They are not going to come and pose for the camera! So what can we do then?

    As I was saying earlier, we can start working on our own cat-detector to detect individual cats (e.g. LaLa and Po). You can see how hard it would be based on the images above (Po is first, then LaLa). They look really similar! Maybe we could fix a camera so it gets their faces when they are eating out of the bowl? That would be useful if we wanted to apply the cat-detector to entry control (i.e. cat doors) too!

    So:

    -- Fix camera so it can acquire images of cat faces when they are eating

    -- Acquire up to 1000 cat face images for each cat. Well, let's get 200 each and go with that first. So just have a Pi taking images when the PIR gets triggered and label them LaLa or Po manually. 

    -- Then we can go about training a CNN, using some of the kernels from the Kaggle Dogs vs. Cats Redux for inspiration -> https://www.kaggle.com/c/dogs-vs-cats-redux-kernels-edition/kernels

    I know some of them use Keras, e.g. https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html (see the sketch after this list for a rough starting point)

    -- Then we can try out our new cat-detector!
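
    As a rough idea of the training step once we have the labelled face images, here's a minimal Keras sketch along the lines of the blog post above. The data/train/lala and data/train/po directories, the image size, and the epoch count are placeholder assumptions:

    # Minimal sketch: small CNN to tell LaLa from Po, in the spirit of the Keras
    # "classification from little data" post. Paths and sizes are assumptions.
    from keras.models import Sequential
    from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
    from keras.preprocessing.image import ImageDataGenerator

    model = Sequential([
        Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3)),
        MaxPooling2D(2, 2),
        Conv2D(64, (3, 3), activation='relu'),
        MaxPooling2D(2, 2),
        Flatten(),
        Dense(64, activation='relu'),
        Dropout(0.5),
        Dense(1, activation='sigmoid')   # binary output: LaLa vs Po
    ])
    model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])

    # expects data/train/lala/*.jpg and data/train/po/*.jpg; augmentation helps
    # because we only have a couple of hundred images per cat
    train_gen = ImageDataGenerator(rescale=1./255, rotation_range=20,
                                   horizontal_flip=True).flow_from_directory(
        'data/train', target_size=(150, 150), batch_size=16, class_mode='binary')

    model.fit_generator(train_gen, steps_per_epoch=25, epochs=30)
    model.save('cat_detector.h5')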

    3. Anyway, let's add code to tweet out the results from the cat-detector now!

    Here's the general code we need to add:

    from twython import Twython
    
    C_KEY = ''
    C_SECRET = ''
    A_TOKEN = ''
    A_SECRET = ''
    
    twitter = Twython(C_KEY, C_SECRET, A_TOKEN, A_SECRET)
    photo = open('image1.jpg', 'rb')
    response = twitter.upload_media(media=photo)
    # message holds the status text; we build it from the detector results below
    twitter.update_status(status=message, media_ids=[response['media_id']])

    You can obtain your C_KEY, C_SECRET, A_TOKEN, and A_SECRET from twitter. Here's how to do that: https://twython.readthedocs.io/en/latest/usage/starting_out.html

    So next we put our detector results in the message string variable (status=message)

    message=("I have spotted: " + top_label + " (" + top_result + ")")

    Ok, so we are good to go with twitter now! Now we could try the idea of asking twitter users to add their own labels to the images. We could ask them to reply with #catdetector #lala if the image was of LaLa (large black and white cat), and ask them to reply with #catdetector #po if the image was of Po (small black cat)!

    ** so I will hang back on this next bit, and just try labelling them myself first, since I don't have the VPS or code ready.

    The entire idea of this is to test the approach to supervised learning I envisaged for elephants. So in this case, I off-load the work of labelling images to twitter users. They reply to cat images with the hashtags above; I then associate these replies with the given cat images and store them in a database. When I have enough, I will go ahead and retrain InceptionV3 with two new classes: PoCat and LaLaCat! I'll do some of it manually to start, then we can automate...
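
    Here's a rough sketch of how the label-collection side might look with Twython: pull recent mentions, keep the ones that are replies to our image tweets and carry the #catdetector hashtag, and record the cat name. The endpoint usage and parsing are assumptions about how I might wire this up, and the database step is left out (just a dict here):

    # Minimal sketch: collect crowd-sourced labels from twitter replies.
    # Uses the same Twython credentials as above; labels are kept in a dict keyed
    # by the image tweet id here (a real build would store them in a database).
    from twython import Twython
    
    C_KEY = ''
    C_SECRET = ''
    A_TOKEN = ''
    A_SECRET = ''
    
    twitter = Twython(C_KEY, C_SECRET, A_TOKEN, A_SECRET)
    
    labels = {}   # image tweet id -> list of labels suggested by users
    for tweet in twitter.get_mentions_timeline(count=200):
        replied_to = tweet.get('in_reply_to_status_id')
        if replied_to is None:
            continue
        hashtags = [h['text'].lower() for h in tweet['entities']['hashtags']]
        if 'catdetector' in hashtags:
            for name in ('lala', 'po'):
                if name in hashtags:
                    labels.setdefault(replied_to, []).append(name)
    
    print(labels)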


  • Building a daytime cat-detector: part 1

    Neil K. Sheridan • 10/28/2017 at 17:13 • 0 comments

    Here's my first software/hardware build guide for the cat detector (as I promised a while back), to help develop and test the hardware and code required for the entire elephant detection system! First we do the daytime cat-detector, then we do a night-time cat-detector with the IR LED illumination and NoIR camera. Then we can connect them together, and add a light-sensor to switch between them!

    Here's the entire series of build instructions I'm going to post:

    • -- Building a daytime cat-detector: part 1
    • -- Building a daytime cat-detector: part 2 (lessons learned)
    • -- Building a night-time cat-detector
    • -- Building a day and night cat-detector with light-sensor switching

    Ok, so first off, let's start building the daytime cat-detector!

    1. First of all let's build the hardware components! So we need the Pi Camera, and we need a PIR. Luckily we covered that earlier https://hackaday.io/project/20448-elephant-ai/log/67655-buildinstructions-for-hardware-pir-and-camera , so you should be familiar with setting those things up! Next, I put them in a small box with holes for the PIR and camera. Then I covered it with a cardboard box so that the cats didn't knock it over. Although they still did!

    2. Now, let's add our code we need! First we will add some code to test the PIR, and take photos when it detects something. We'll use this code for that: 

    import time
    import picamera
    import datetime
    import RPi.GPIO as GPIO
    
    def CheckPIR():
        # dependencies are RPi.GPIO and time
        # returns PIR_IS as either 0 or 1, depending on whether the PIR output is low or high
        time.sleep(1)
        # don't rush the PIR!
        GPIO.setmode(GPIO.BOARD)
        # set numbering system for GPIO PINs to BOARD
        GPIO.setup(7, GPIO.IN)
        # set up PIN 7 for input from the PIR
        # adjust if you connected the PIR to another GPIO PIN
        PIR_IS = 0
        try:
            val = GPIO.input(7)
            if val == True:
                PIR_IS = 1
                # PIR returned HIGH to the GPIO PIN, so something is here!
            else:
                PIR_IS = 0
                # PIR returned LOW to the GPIO PIN, so nothing is here
        finally:
            GPIO.cleanup()
    
        return PIR_IS
    
    
    while True:
        # check the PIR and store what it returns in PIR
        PIR = CheckPIR()
        if PIR == 0:
            print("Nothing has been detected by PIR")
        elif PIR == 1:
            print("Something has been seen! Time to photograph it!")
            i = 0
            with picamera.PiCamera() as camera:
                while i < 5:
                    i = i + 1
                    print(i)
                    camera.start_preview()
                    time.sleep(1)
                    utc_datetime = datetime.datetime.utcnow()
                    timestamp = utc_datetime.strftime("%Y-%m-%d-%H%M%SZ")
                    # date and time (to the second) appended to the image filename
                    camera.capture('image_' + timestamp + '.jpg')
                    camera.stop_preview()
                    time.sleep(1)
                    if i == 5:
                        break

    GitHub:  https://github.com/nksheridan/elephantAI/blob/master/demo_Take_Photo_when_PIR_high.py


    TENSORFLOW: So we went through installing TensorFlow already, and you should have it at this stage on your Pi. But if not, you can follow this guide https://github.com/samjabrahams/tensorflow-on-raspberry-pi to install the pre-built binaries. 

    Next, you should clone the TensorFlow repository: git clone https://github.com/tensorflow/tensorflow.git. Then you should download the graph file for InceptionV3: https://storage.googleapis.com/download.tensorflow.org/models/inception_v3_2016_08_28_frozen.pb.tar.gz. This should be unzipped and placed in the directory ~/tensorflow/tensorflow/examples/label_image/ along with the labels file that was in the archive.

    Subsequently, it would be good to try just using label_image.py (which should be in this directory) to label/classify an image. I did a video of this here: https://hackaday.io/project/20448-elephant-ai/log/69744-video-demo-of-elephant-detection-using-a-raspberry-pi. Next, you could label an image acquired from the Pi Camera; I went through this here ->  https://hackaday.io/project/20448-elephant-ai/log/69772-simple-demo-code-for-taking-a-photo-with-raspberry-pi-and-classifying-it-using-inceptionv3-graph...


  • Simple demo code for taking a photo with Raspberry Pi and classifying it using InceptionV3 graph

    Neil K. Sheridan • 10/28/2017 at 12:16 • 0 comments

    Here we can take an image with the Pi camera, and send it to label_image.py.  We went over using label_image.py in this log https://hackaday.io/project/20448-elephant-ai/log/69744-video-demo-of-elephant-detection-using-a-raspberry-pi . This will classify the image using InceptionV3 model graph and labels files. You can download a graph file for InceptionV3 here -> https://storage.googleapis.com/download.tensorflow.org/models/inception_v3_2016_08_28_frozen.pb.tar.gz

    import time
    import picamera
    import os
    
    camera = picamera.PiCamera()
    
    
    print("Capture an image")
    camera.start_preview()
    time.sleep(1)
    camera.capture('image1.jpg')
    camera.stop_preview()
    time.sleep(1)
    print("Send it to label_image.py")
    os.system('python label_image.py --image=image1.jpg --graph=inception_v3_2016_08_28_frozen.pb --labels=labels_incep.txt')
    print("All done!")
    

    Here I ran it whilst pointing the Pi camera at a banana! And yes, as you can see, it was classified as a banana!

    Our apple, however, resulted in some confusion!
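
    If you want the classification result back in the Python script rather than just printed to the terminal (for example, to tweet it later), one variation is to capture the output of label_image.py with subprocess instead of os.system. The parsing of the first line is an assumption about the script's output format:

    # Minimal sketch: capture label_image.py output with subprocess instead of
    # os.system, so the result can be reused in the script (e.g. for a tweet).
    # Assumes label_image.py prints its best guess on the first line of stdout.
    import subprocess
    
    output = subprocess.check_output(
        ['python', 'label_image.py', '--image=image1.jpg',
         '--graph=inception_v3_2016_08_28_frozen.pb',
         '--labels=labels_incep.txt'],
        universal_newlines=True)
    
    top_line = output.strip().splitlines()[0]
    print("Top result: " + top_line)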

  • Things to do remaining

    Neil K. Sheridan • 10/27/2017 at 20:12 • 0 comments

    • Fix build of TensorFlow on the Raspberry Pi. It seems like a permissions issue re NSYNC? So it should be easy to fix. [this is a real mess tbh]
    • Write a guide for using pip to install pre-built TensorFlow binaries. Some of the problems with this can be addressed by building protobuf on the Pi. From what I recall it was NSYNC again for most of the issues?
    • Modify label_image.py to return variables we want and remove argparse etc. I.e. just write it again [completed]
    • Code for the detection devices needs to classify the image using the InceptionV3 graph first; then, if we get >0.70 for elephant, go on to use the graph from the retrained InceptionV3 to classify the specific class (e.g. lone, herd). See the sketch at the end of this list.
    • Finish up with mounting in boxes [these waterproof boxes are proving difficult; I'm not very good at 3D printing]
    • Test as a wildlife detection camera trap. Upload everything we get on PIR==HIGH to twitter with labels and results. So we should place it in the woods []
    • Test with horses [yes we can do this now, but we need waterproof boxes!]
    • Test with the cat-detector https://hackaday.io/project/20448-elephant-ai/log/69784-building-a-daytime-cat-detector-part-1 -- we are using this to test our supervised learning via twitter idea! [it's really bad for cats off-the-shelf with InceptionV3 so I am freezing this]
    • Test with elephant images taken from Pi camera
    • Go ahead with https://hackaday.io/project/20448-elephant-ai/log/68940-future-directions-and-improvements-to-elephantai so we need to do a proof of concept with something local, like birds in the garden, for the supervised learning via twitter and the virtual private server. [underway]
    • Try RF comms: give a detection device RF comms, and add a Pi with 2G/3G + RF comm RX as a base station. Try a repeater station too; that would get RF comms from the detection device and send them another few km to the base station! [ok, I am working on this per https://hackaday.io/project/20448-elephant-ai/log/70068-rf-network ]
    • Make a Raspberry Pi HAT for the detection devices. We don't want breadboards and jumper wires all over the place, so I'm going to try and make a HAT instead. This should be fun!
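
    For the two-stage classification mentioned above, here's a rough sketch of the decision logic. classify() stands in for the modified label_image.py (the "return variables we want" item above), assumed to return a list of (label, score) pairs, best first; the function name, its signature, and the graph/labels filenames are placeholders rather than the actual code:

    # Rough sketch of the two-stage classification on the detection device.
    # classify() is an assumed helper (a modified label_image.py wrapped as a
    # function) returning (label, score) pairs, best first; names are placeholders.
    def detect_elephant(image_path):
        results = classify(image_path,
                           graph='inception_v3_2016_08_28_frozen.pb',
                           labels='labels_incep.txt')
        top_label, top_score = results[0]
        if 'elephant' in top_label.lower() and top_score > 0.70:
            # second stage: retrained InceptionV3 graph for lone vs herd
            results = classify(image_path,
                               graph='retrained_graph.pb',
                               labels='retrained_labels.txt')
            return results[0]    # e.g. ('herd elephants', 0.83)
        return None              # no elephant with sufficient confidence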

    Well, I will update as I complete these!

  • Video demo of elephant detection using a Raspberry Pi

    Neil K. Sheridan • 10/27/2017 at 19:38 • 0 comments

    Hi, so this Pi has TensorFlow and protobuf built and installed. First I will show it using label_image.py from https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/label_image to classify a panda. Yes, I know. In this case, we use a graph file from InceptionV3 off the shelf.

    Next, I will use the label_image.py code from https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/image_retraining/label_image.py and I'll pass my graph file from retraining InceptionV3 to this, along with the labels file. See here for the retraining instructions: https://hackaday.io/project/20448-elephant-ai/log/68436-software-elephant-detector-transfer-learning-off-shelf-model-with-tensorflow

    My retraining on InceptionV3 was to add two new classes: herd elephants, and lone elephants. So we can see what kind of results we get for classification of images containing herds of elephants, and lone elephants!

    It takes around 10-15 seconds for the Raspberry Pi to classify the images. That's with 16MB allocated to the GPU, and running a GUI. It should be faster if you don't use a GUI. I'll try and get some comparisons later.

    Here's the video! 

    I've uploaded the graph file here <add it>

    And the labels.txt here <add it>

    So you can try! However, this is just from a test retraining run, and I only used 50-100 images per class.

  • Guide to installing TensorFlow on Raspberry Pi

    Neil K. Sheridan • 10/21/2017 at 18:46 • 0 comments

    [ u n d e r    c o n s t r u c t i o n ]

    Installing TensorFlow on the Pi causes a lot of problems, as you'll see if you search for related issues on the web, and it certainly caused me a lot of problems! So I'm writing this guide to illustrate approaches and things that can go wrong, plus how to fix them.

    Please check you have Raspbian "Jessie" as the OS on the Pi first! And that you don't have a small or nearly full SD card.

    I'm going to add screen-recordings of the installations to help!

    USING MAKEFILE TO BUILD TENSORFLOW

    This is from the install guide here: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/makefile

    This is an alternative to using bazel to build TensorFlow. You compile the TensorFlow and Protobuf libraries. Here's what protobuf is if you hadn't heard of it: https://github.com/google/protobuf

    1. Clone the TensorFlow repository to the Pi. git clone https://github.com/tensorflow/tensorflow.git

    2. Run the download_dependencies.sh script that the TensorFlow team have written. You can see it here https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/makefile/download_dependencies.sh and you'll have it on your Pi at tensorflow/contrib/makefile/download_dependencies.sh. This will download all the required dependencies. (This is one of the ways I got in a mess before: I tried to get them all individually, and got in a muddle.)

    (e r r o r) So this was the point of the first error! Here it is below: tensorflow/workspace.bzl didn't exist! And if you look at the download_dependencies.sh script, you'll see it wants to use that directory! Ah well, this was because the TensorFlow repository had been cloned to tensorflow/tensorflow instead of just tensorflow!

    So to fix this error, we have to edit download_dependencies.sh! Let's go and do it with the sudo nano command. It's in the directory tensorflow/tensorflow/contrib/makefile/. We change the lines DOWNLOAD_DIR= and BZL_FILE_PATH= to the correct directories; in the below image, I have changed them to /home/pi/tensorflow/tensorflow/. Now we can run the download dependencies script without an error!

    Here's what you should see when you run the script!

    Great! So that bit is done!

    3. Download the example graph for testing . Here it is: https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip

    4. Now with 

    sudo apt-get install -y autoconf automake libtool gcc-4.8 g++-4.8

    we will get our packages for installation. So the packages are:

    autoconf  https://www.gnu.org/software/autoconf/autoconf.html for auto configuration

    automake https://www.gnu.org/software/automake/ to auto generate makefile.in

    libtool https://www.gnu.org/software/libtool/ support for working with shared libraries

    gcc-4.8 compiler https://gcc.gnu.org/onlinedocs/4.8.1/ (this is used instead of gcc 4.9, see note below)

    ** note that gcc 4.8 is used instead of gcc 4.9 which is installed on the Pi OS, because 4.9 is known to encounter an error involving  __atomic_compare_exchange.

    g++ 4.8 compiler

    I didn't have any problems with that bit! But it did take quite a while i.e. 30 mins+

    5. Building protobuf:

    cd tensorflow/contrib/makefile/downloads/protobuf/
    ./autogen.sh
    ./configure
    make
    sudo make install
    sudo ldconfig  # refresh shared library cache
    cd ../../../../..  # back to the tensorflow directory
    export HOST_NSYNC_LIB=`tensorflow/contrib/makefile/compile_nsync.sh`
    export TARGET_NSYNC_LIB="$HOST_NSYNC_LIB"

    (e r r o r) Now we encounter the second error! This occurs when you attempt the first export command: permission was denied to mkdir! Specifically, "cannot create directory tensorflow/contrib/makefile/downloads/nsync/builds/default.linux.c++11 Permission Denied"

    So that's because of the permissions. I went ahead and used chmod to change them!


  • #buildinstructions: light-sensor

    Neil K. Sheridan • 10/20/2017 at 20:33 • 0 comments

    Let's get started with adding a light-sensor, so we can tell if it is daytime or night-time! We need a light-sensor with a digital output. Or we can build a circuit using a photo-resistor and a transistor instead!

    PARTS REQUIRED

    Digital light sensor. I have used the "kwmobile light sensor with digital output, photodetector, brightness sensor, light sensor for Arduino, Genuino and Raspberry Pi", which was £5.40.

    jumper wires (you can test with female to female, but will need female to male when we share the light sensor between dayPi and nightPi)

    1.

    Connect the jumper wires to the light-sensor:

    2. Now we connect the jumper wires to the raspberry pi:

    Connect digital output (DO/yellow) to GPIO PIN 7 [board numbering system]

    Connect GND to GND

    Connect 5v input to a 5v output PIN on Pi

    3. Now that's all done! We can go ahead and test it using the following code:

    import RPi.GPIO as GPIO
    import time
    
    GPIO.setmode(GPIO.BOARD)
    GPIO.setup(7, GPIO.IN)
    try:
        while True:
            val = GPIO.input(7)
            if val == True:
                print("IT IS DARK CONDITION")
            else:
                print("IT IS LIGHT CONDITION")
            time.sleep(3)
    except KeyboardInterrupt:
        pass
    finally:
        # clean up once we stop the loop, not inside it
        GPIO.cleanup()
    

     Here it is in action! You can see it has its own LED that turns on/off depending on the light condition.

    4. Ok, we are all set for using the light-sensor now! I'll add the homemade circuit later!

  • #buildinstructions: allowing dayPi and nightPi to share PIR and light-sensor

    Neil K. Sheridan • 10/20/2017 at 19:26 • 0 comments

    Here we show how to allow the dayPi and nightPi to share the PIR and light-sensor!

    PARTS REQUIRED:

    • half-length breadboard
    • PIR
    • light-sensor with digital output
    • numerous jumper cables!
    • dayPi
    • nightPi
    • * I used the Adafruit cobbler for testing

    Let's get started!

    1. Let's do the light-sensor first. This has a 5v input, a GND, and digital output (giving HIGH or LOW depending on lighting conditions). We need to connect all of these to independent terminal strips on the breadboard. So for example:

    5v goes to terminal strip 1. So we connect a jumper wire from this terminal strip to the 5v output on nightPi

    GND goes to terminal strip 2. So we connect two other jumper wires to this terminal strip. One will go to GND on nightPi and one will go to GND on dayPi

    Digital output goes to terminal strip 3. So we connect two other jumper wires to this terminal strip. One will go to GPIO 11 on nightPi, and one will go to GPIO 11 on dayPi. [note this is BOARD numbering for the GPIOs]

    Let's see what this looks like with a photo! This includes the wiring for the PIR, which is the same kind of thing but to GPIO 7 on each Pi for the digital output!

    2. Now let's do it for the PIR. In the photo you can see the light-sensor is wired to the terminal strips on the right of the divider, and the PIR is wired to terminal strips on the left of the divider (the middle groove of the breadboard).

    5v goes to terminal strip 1. So we connect a jumper wire from this terminal strip to the 5v output on dayPi

    GND goes to terminal strip 2. So we connect two other jumper wires to this terminal strip. One will go to GND on nightPi and one will go to GND on dayPi

    Digital output goes to terminal strip 3. So we connect two other jumper wires to this terminal strip. One will go to GPIO 7 on nightPi, and one will go to GPIO 7 on dayPi. [note this is BOARD numbering for the GPIOs]

    If you zoom into the photo, you should be able to follow the paths of the wires!

    Here's a close-up of the breadboard. You can see how 5v input to the light-sensor and PIR is sent to the first terminal strip, and then it meets the 5v output from the Pi there. Thus we supply power to the light-sensor and PIR. And GND from the light-sensor and PIR goes to the next terminal strip, and from there is sent to GND on both dayPi and nightPi. And digital outputs from light-sensor and PIR go to the next terminal strip down, and they meet wires which take them to the GPIO PINs on dayPi and nightPi.

    And in the below photos you can see the wires connecting with the dayPi and nightPi:

    3. Great! We are all ready to share the PIR and light-sensor now!
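
    To confirm the sharing works, here's a minimal sketch that can be run on the dayPi and the nightPi at the same time, using BOARD numbering with the PIR on pin 7 and the light-sensor on pin 11, exactly as wired above:

    # Minimal check that both Pis can read the shared sensors simultaneously.
    import time
    import RPi.GPIO as GPIO
    
    GPIO.setmode(GPIO.BOARD)
    GPIO.setup(7, GPIO.IN)     # PIR digital output (shared)
    GPIO.setup(11, GPIO.IN)    # light-sensor digital output (shared)
    try:
        while True:
            pir = GPIO.input(7)
            light = GPIO.input(11)
            print("PIR: " + ("HIGH" if pir else "LOW") +
                  " | light-sensor: " + ("DARK" if light else "LIGHT"))
            time.sleep(2)
    except KeyboardInterrupt:
        pass
    finally:
        GPIO.cleanup()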

  • Testing of ElephantAI

    Neil K. Sheridan • 10/19/2017 at 20:18 • 0 comments

    So, at this stage we have built, tested, and set up all the computational components of the system, including their associated circuits, and we have added our final code to each of them!

    Remember we have three primary computational components:

    -- Elephant Detection Device

    which is comprised of:

    dayPi (Raspberry Pi)

    nightPi (Raspberry Pi)

    -- Elephant Deter Device

    which is comprised of:

    PiZero

    AmpZero

    Now let's get started on testing how the system interacts!

    For testing, I suggest having a monitor and keyboard/mouse for each of the three primary computational components. And of course, we want all of our associated circuits set up! You don't need the solar recharging circuit; you can just test the IR illumination devices by connecting a 12V battery to the optically isolated switching circuit.

    Let's see how we get on!

View all 59 project logs

View all 8 instructions


Discussions

shouvik7 wrote 04/19/2018 at 07:21

hello Neil,

I am currently working on this project using a Raspberry Pi 3. I was able to successfully download TensorFlow version 1.1.0 and also Keras version 2.5.1, but when I run the following command

model = InceptionV3(weights='imagenet')

I get an error saying 

TypeError: softmax() got an unexpected keyword argument 'axis'

How do I proceed from this? I'd appreciate it if you could help me out here.


Thomas wrote 04/25/2017 at 19:17

Hi Neil, I think this here might be of interest: https://hackaday.io/project/561-summerize-total-motion-in-a-video


Neil K. Sheridan wrote 04/25/2017 at 19:38

Hi, Thanks! That does look interesting! Will go thru it!


Thomas wrote 04/25/2017 at 19:56

When I think of elephants, the first thing that comes into my mind is how they move. The idea of using optical flow for creating a "movement spectrogram" is intriguing. The first couple of lines in the Wikipedia article on optical flow point to interesting approaches:
https://en.wikipedia.org/wiki/Optical_flow


Thomas wrote 01/28/2018 at 16:22

Another bit of information you may find interesting:

https://www.nytimes.com/2018/01/26/science/bees-elephants-.html


Neil K. Sheridan wrote 03/26/2017 at 20:21

yes! I'm going to post it later this week! I'm just taking out the bits that aren't relevant so it is easy to follow! 


jessica18 wrote 03/26/2017 at 17:23

can you post the code

