
Raspberry Pi Sensor Network

A scalable, expandable home monitoring system designed for ease of use.

The goal is to have an easily configurable Raspberry Pi platform that can be deployed singly or across a network. Using a variety of sensors, and a wide definition of "sensor", the system will be able to provide varying levels of situational awareness, as needed.

Situational awareness means that the system will intelligently notify you using various reporting mechanisms. Critical alerts (the house is flooding/on fire) should raise a ruckus, whether you're sitting in the kitchen, sitting on the road, or at work.

Easily means that anyone can do it. Initially, the project is going to be developed by amateur hackers and programmers, whose technical capabilities dwarf those of many users. It needs to be easier than that.
An optional end goal, being able to control local devices remotely, is also planned.

Remember all the cool "villain lair" sensor networks from the movies? It's that.

The system is being designed to work either as a single node (headless or not) that pulls in and logs sensor feeds, or as a network of nodes. The goal is to allow a user (or another system) to poll the logging node(s) for data. Eventually, it is hoped that the system will be able to make semi-intelligent decisions based upon multiple sensor feeds (room occupancy, visitors versus burglars, pet versus human discrimination, etc.). If multiple nodes are present, the system is designed to report sensor information in a relevant fashion.

By "relevant", the system is being designed to gather sensor data and report it using specifically defined criteria (too hot, too cold, periodic updates, or a change in criteria based on other sensor inputs). The node itself is constantly monitoring the sensor feeds using multi-threading. Only when specified criteria are met, are the sensor feeds being reported to the logging system. No one needs to be continually updated that the flood sensor is dry, but it's important to know that it's still running.

In the same vein, the system needs to be able to "give" data as needed. I would like to be able to proactively pull data from sensor nodes to the logging node(s)/reporting node(s) as needed. Has the garage heard ANY sounds in the last hour, not just the sounds that meet the reportable criteria?

With just me doing this, everything's being written in Python 2.x. I tried doing a project before in 3.x, and wound up back-porting it. I'm going with the path of least resistance.

Initially, the sensors are I2C devices. I'm working on a full-on library for the DS18B20 thermometers. I'm going to tie in one of my other projects, a Pi Network monitor (https://hackaday.io/project/6686-raspberry-pi-macip-monitor), so that it can also report. I'd like to eventually include cameras (both the Pi camera and USB cameras), digital I/Os (motion detectors being high on that list), and a Bluetooth proximity tracker. With those sensors included, I should be able to determine not only room occupancy, but who is in the room.
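
As a taste of where the DS18B20 work is headed, here's a minimal sketch of reading one over Raspbian's 1-Wire sysfs interface. It assumes the w1-gpio/w1-therm kernel modules are loaded, and it is not the library's actual API:

    # Minimal sketch: read DS18B20 sensors via the 1-Wire sysfs interface on Raspbian.
    # Each DS18B20 shows up as its own /sys/bus/w1/devices/28-xxxxxxxxxxxx folder.
    import glob

    def read_ds18b20(device_file):
        with open(device_file) as f:
            lines = f.read().splitlines()
        # The first line ends in "YES" when the CRC check passed
        if not lines or not lines[0].strip().endswith('YES'):
            return None
        # The second line contains "t=" followed by the temperature in milli-degrees C
        _, _, temp_raw = lines[1].partition('t=')
        return float(temp_raw) / 1000.0

    if __name__ == '__main__':
        for path in glob.glob('/sys/bus/w1/devices/28-*/w1_slave'):
            temp_c = read_ds18b20(path)
            if temp_c is not None:
                print('%s: %.2f C' % (path, temp_c))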

Currently I'm the only one working on this. I've got a wonderful full-time job that keeps me busy, another job on the side, and I try to help out in the community. If you've got constructive ideas, thoughts, etc, share 'em! If you want to pitch in, let 'er rip! If you want to use the code, please let me know what you're doing with it. If you're going commercial with it, cut me in for a slice. Student loans are a horrible thing, and it's a burden I want to get rid of in the worst way possible.

So... now that there's a Contest... Why is this important?

Simple. The internet ushered in a completely new age of capability. At our fingertips is more data than someone just a couple of decades before could have hoped to encounter in their lifetime. In several lifetimes.

The Internet of Things can revolutionize that for ourselves, not just the world around us. There is an amazing amount of information around us that slips through our fingers, that our devices aren't smart enough to handle (yet). The first step is gathering the data, but the next step will be tailoring the systems around us to use that data, in context.

The goal isn't to make a simple Raspberry Pi sensor, but to make a system that can bring in all of the data that is thrown at it, and determine meaningful conclusions from that data.

InitialI2CLayout.fzz

I2C with resistor pull-up examples. Two different boards populated.

fzz - 5.19 kB - 04/07/2016 at 18:08


  • 1 × Raspberry Pi: Pick your version. This is being built on Raspbian Jessie.
  • 1 × SD Card: At least 4 GB; we're not using anything less than 8 GB since we're going to need storage.
  • 1 × BMP180 I2C Temperature and Pressure Sensor: https://www.adafruit.com/products/1603 However, I'm now using an eBay GY-80 or GY-87 board with a slew of additional I2C sensors on it.
  • 1 × 5V 2A USB power supply: I've tested down to 0.7 A power supplies with no real problem, but I'm also not using wireless or other power draws on the board.

  • Raspberry Pi 3 Migration

    staticdet5 • 03/28/2016 at 21:36 • 0 comments

    I got sick this weekend, so things slowed down a bit. I migrated both this project and the Pi Network Monitor over to Raspberry Pi 3's running Jessie. I used this opportunity to improve my system configuration documentation (my static IP procedure was dated and wasn't working correctly, as were my Adafruit git clones and installs). With the fever, I got to some pretty stupid levels, and flailed against the computer for a bit. If I'm flailing against the install procedure, then it's not a good one.

    For that reason, I'm going to repeatedly engage in this procedure when installing new nodes across the house. Right now I've got two working nodes at my desktop: the main logging Node with BMP180 and ADXL345 sensors operational, and a work-in-progress network monitoring Node. These two will likely live here. I want the network sensor Node attached to the internet gateway (for now), while I'm learning better techniques for discovering what is on the network. The main logging node needs to be close to my desk because I'm experimenting like crazy on it.

    I did set up a dedicated work area for setting up new Pi's. I'm planning on repurposing a couple of Pi 2's that I have lying around as sensor Nodes to distribute around the house. I'm also going to grab two more 3's to distribute as well, possibly including outside the house, but specifically to use for developing Bluetooth and WiFi sensors.

    Hopefully by the end of the week I'll be able to set up a network of thermal and pressure sensors around the house.

  • New version inbound!

    staticdet5 • 03/24/2016 at 20:40 • 0 comments

    I'm committing kind of a jerk move to my 6 followers (Well, five... Thanks mom, but I know you don't know what a Raspberry Pi does, even though you've gotten me a couple for gifts...). I'm going to post up and talk about an update before it actually hits GitHub. I've got some time, and it's going through a burn test as we speak.

    So... Changes. I've given up on "mainlining" the threading of sensors. I'm going to branch that on GitHub, and learn a bit there as well. I got target fixated on threading, and almost drove the project into the ground because of it. That's stupid, and so many projects fail because folks are overly fixated on solving problems that can be ironed out later.

    Instead, the Node now polls sensors as fast as it can, tabulates the values over a minute, and then reports once a minute (once for each sensor). In the report is the name of the sensor, the min/max values for the last minute, the average value for the last minute, and the number of collected samples. This last value (the number of collected samples) is going to be used as a performance metric, so I can gauge how code and sensor changes impact the number of measurements per second. Currently, the system is measuring around 40 sensor reads per second. As an additional bonus, the system is still checking for alert values while it is polling sensors, and will issue alert reports if it detects a high value (I need to test this, but that's a simple matter of changing a test value).
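
    A stripped-down sketch of that polling/tabulating loop (read_sensor() is a stand-in for the actual hardware call; the names are illustrative, not the project's real code):

        # Poll as fast as possible, tabulate for a minute, then report min/max/avg/count.
        import time

        def poll_and_report(name, read_sensor, interval=60.0):
            samples = []
            window_start = time.time()
            while True:
                samples.append(read_sensor())
                # alert checks on each raw sample would still happen here
                if time.time() - window_start >= interval:
                    report = {
                        'Sensor': name,
                        'Min': min(samples),
                        'Max': max(samples),
                        'Avg': sum(samples) / len(samples),
                        'Samples': len(samples),   # performance metric: reads per window
                    }
                    print(report)                  # stand-in for handing the report to the Node
                    samples = []
                    window_start = time.time()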

    But this means that I can start working on some more fun items. The network monitor is going to get ported over soon. The "meat" of that project is already done, and works. What I need to do is take the basic functionality and turn it into a callable sensor that reports when new MAC/IPs become active on the network, or when new MACs associate with the network. The next step of that project will be to include a WiFi sensor that uses promiscuous mode to check out the non-affiliated traffic in the area. I would love to be able to "fingerprint" this kind of activity to get a better idea of who is in the vicinity.

    The other sensor that I want to get "in the can" is a motion sensor. These range from cheap and easy (digital pin'd IR motion sensors) to slightly more complex (I2C and SPI range finders). I want to incorporate range sensors as motion sensors because of the number of dogs that I have in the house. Plus, it'll give me another method for determining if doors are open or closed. Finally, there are times where it would be really useful to know if someone is walking in a certain direction (say, towards my bedroom door in the middle of the night).

  • Logging Protocol: Part 2

    staticdet5 • 03/16/2016 at 22:59 • 0 comments

    Yesterday's post was big, and ties directly into this one. Today I'm going to describe the actual logline contents, along with an explanation for each entry (when needed).

    First, it's important to realize that we're logging each event in two places. There's a local log file kept on each node, and a log kept on the main logging node. This is partly for backup, but primarily to allow higher resolution sensor logs to be kept locally and automatically purged as they age. Keeping dozens of nodes' high resolution data for weeks to months would get prohibitively expensive in terms of network bandwidth and storage.

    For the most part, the logs on the sensor node are similar to the logs sent to the main logging node, because the process is essentially the same. The sensor node is constantly polling sensors, kicking out a log line either when it detects an alert or when it reaches a timing threshold (a minute since the last report, five minutes since the last report, etc.).

    Each sensor is its own programmatic construct (object) that reports to another programmatic construct (object), the Node. The sensor object decides when the conditions are right for it to report, and then hands its data to the Node. The Node takes the sensor data (typically the name of the sensor board, sensor type, and sensor value) and packages it with Node-specific data (Node name, IP), also appending the send time. All of this data is packaged in a Python dictionary, which is ultimately Pickled (cPickled, to be precise) before it is both saved locally and sent over the network to the logging Node.

    Once the message is received at the logging Node, the message is unPickled, and the receipt time is appended. This is done so that time in transit can be (possibly) measured. This may give an indication of network performance issues down the line (and the logging node could create a sensor looking at this data).
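
    Roughly, that round trip might look like the sketch below. The field names and helper functions are illustrative, not the project's actual API:

        # Package a sensor report with Node data and a send time, Pickle it,
        # then unPickle on the receiving side and stamp a receive time.
        import time
        try:
            import cPickle as pickle   # what the project uses on Python 2.x
        except ImportError:
            import pickle              # fallback so the sketch also runs on Python 3

        def package_logline(sensor_report, node_name, node_ip):
            # sensor_report might be e.g. {'Board': 'BMP180', 'Type': 'Temp', 'Value': 21.4}
            logline = dict(sensor_report)
            logline.update({'Node': node_name, 'IP': node_ip, 'SendTime': time.time()})
            return pickle.dumps(logline)     # ready to save locally and/or send over the network

        def receive_logline(raw_bytes):
            logline = pickle.loads(raw_bytes)
            logline['ReceiveTime'] = time.time()   # lets time in transit be estimated later
            return logline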

    Since the loglines are stored as Python dictionaries, parsing them is pretty quick. I've already written some commands that can be used recursively to pull a sub-set of the logs, and create a pseudo-log that can then be re-parsed for a different criteria (Show me all loglines from yesterday that contained temps).

    Times are recorded in epoch time because that is MUCH simpler to deal with when programming. Python makes it very easy to put in an epoch time and spit out a human readable format.
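
    For example, converting an epoch timestamp for display is one line with the standard time module:

        import time

        epoch = time.time()
        print(time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(epoch)))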

    Because the logging protocol uses Python dictionaries, with all of that flexibility, I can log all kinds of events. When a Node is booting up, it is logging the boot-up process, including configuration and connectivity. When the system is "finalized", we'll be able to write a log parsing file that re-creates configuration files, or even archives them automatically when a change is noted in the configuration. This feature will really help novice users when they are building their own system. Make a change and it breaks things, execute a couple of simple commands, and the system automagically reverts back to a working version.

    During boot-up, the system is checking the I2C bus, and logging active I2C addresses that it finds. This is another nice feature that will help users get their sensor network up and running. It's not a big issue, but for a newbie, every little obstacle seems like a massive hurdle.

    I'm working on bringing my other project, a Raspberry Pi MAC/IP monitor, over to this project. The goal of that project was to detect and later classify when new systems came onto the network. I'm hoping that I can leverage this as a "presence" sensor, and detect when cellphones are in the residence/sensor net. The next effort, on that part of the project, is to get a network sensor node up and operational, and then have it report back to the main logging system using the already developed Node architecture. Eventually I'd like to be able to detect when Bluetooth devices are in range of nodes,...


  • Logging Protocol: Part 1

    staticdet5 • 03/15/2016 at 21:08 • 0 comments

    If you've read previous logs, you'll see that one of the goals is to be able to log numerous different kinds of sensors, events, and activities. To do this, and to be able to use these logs to drive future decisions, the logging protocol needs to be flexible. To enable a wide range of users to be able to use the system as a whole, the logging protocol needs to be relatively open ended.

    For these reasons, I chose to use Python dictionaries to store data (either cPickled or as a standard Python output file). This is going to allow me to store values with verbal keys that define what the value is. The log file (after it is unPickled) is going to be human readable (allowing for easy use by others).

    In Python, dictionaries are considered an "Indexed" data type. An easy "Indexed" data type is a numbered list (in Python, called a "List", curiously enough). For a numbered list, you have a value associated with a whole number. Each value is assigned a whole number, starting with zero. If you add a new value to the list, it automatically gets the next number in line.

    Dictionaries are like that, but instead of a number, you have... something else. We call that something else a "Key". Each dictionary can have a large number of key/value pairs, but each key must be unique. The values can be anything. I can use a Python dictionary to describe a car: car = {'Wheels' : 4, 'Color' : 'Blue', 'Speed' : '6', 'Seats' : '5', 'Roof': 'Convertible'}. I can also use it to describe a motorcycle: motorcycle = {'Wheels' : 2, 'Color' : 'Blue', 'Speed' : '6', 'Seats' : '1.5'}

    When I need to get values back, I can easily get them. car['Color'] returns 'Blue'. It gets a little cooler, because I can do things like: 'Roof' in car, and get back True. But if I did that with motorcycle, it would report back False.

    Looking at the logging function of the project, this gets really useful. I can easily write code that grabs log files, and only pulls data that meets specific criteria. If I'm looking to check temperatures for the house, between certain times, I can immediately cull any loglines that don't fall within my time frame. I can then cull any loglines that don't contain the "Temp" key in the dictionary. Then I can parse all of the remaining loglines and pull the temperatures that I need. Further, because the dictionary can have almost any arbitrary length, I can include a fair amount of detail there. I can further limit the logline pull so that it only checks the garage temperature sensor. Or all the temp sensors except for the one in the furnace room. This is going to give tremendous capability to the user, down the line.
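
    A filter along those lines might look something like this sketch. The key names ('SendTime', 'Temp', 'Sensor') are assumptions for illustration, not a fixed schema:

        # Cull a list of unpickled logline dictionaries down to a "pseudo-log".
        def cull(loglines, start, end, sensor=None):
            keep = []
            for line in loglines:
                if not (start <= line.get('SendTime', 0) <= end):
                    continue                      # outside the requested time frame
                if 'Temp' not in line:
                    continue                      # only loglines that carried a temperature
                if sensor is not None and line.get('Sensor') != sensor:
                    continue                      # e.g. restrict to the garage sensor
                keep.append(line)
            return keep                           # can be re-parsed against different criteria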

    The drawback is that this isn't a tiny file format. It's human readable, and if it isn't Pickled (cPickled), then it is flat text. Users could configure the system to use flags and tags to designate certain values and locations, but it would lose the easy human readability. They'd probably have to use a chart or a computer program to glance at logs, and I want to avoid that.
    With that bulk comes a slower system. So far, this hasn't been an issue. But the plan is to use Pickling to compress the loglines whenever possible.

    Pickling is how Python "serializes" objects, turning them into a byte stream. This byte stream can be used for a lot of things, but is typically (in my experience) written to disk. The opposite of Pickling is unPickling, which "unpacks" the object, allowing Python to easily use it. Pickle is written in Python, and you can take it apart and see how it works. Pickle is kinda slow, so someone wrote cPickle, which is written in C. This makes it up to a thousand times faster.
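
    A minimal sketch of Pickling loglines to a local file and reading them back; the file name and record layout are illustrative:

        import cPickle as pickle   # plain "import pickle" works too, just slower

        def append_logline(logline, path='node_local.log'):
            with open(path, 'ab') as f:
                pickle.dump(logline, f)           # records stack up one after another

        def read_loglines(path='node_local.log'):
            records = []
            with open(path, 'rb') as f:
                while True:
                    try:
                        records.append(pickle.load(f))   # read back one record at a time
                    except EOFError:
                        break
            return records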

    Pickle helps us in three ways (so far). First, it really makes it easy to save things. You can Pickle almost any object. Once an object is Pickled, you can save it to a file, or send it over the network (among other things). Next, Pickled objects tend to be smaller. This will save space on the drive. Finally, since the loglines are being Pickled, and then sent over the network, if the...


  • Sensor polling

    staticdet5 • 03/14/2016 at 19:35 • 0 comments

    I'm going to be storing a lot of data, so I need to develop a protocol for storing and handling that data. I touched on it briefly, but this is actually the central, critical part of the project. The goal here is to be able to store any kind of data, and have the system (or user) be able to pull the data they need to make decisions.

    Currently, I'm looking at three memory modalities. The first, very short-term memory, is happening within the programs themselves. Certain sensors you want polled continuously, either reporting a min/max, an average, or not reporting unless certain conditions are met (or a combination of these traits). I don't care that the door is closed, 30 times per second. But I want to know when it opens (and I may want to know for how long it opened), and I definitely would like to be able to send a request to make sure it is closed.

    For this reason, many sensors (if not all) are going to be set up to be continuously polled. In testing I was able to poll a BMP180 for temperature and pressure over 30 times per second. If I can continue polling air pressure that quickly, then I stand a good chance of detecting door-opening events through that mechanism. For temperature, I've already written a delta alarm: if the temperature rises by more than a set amount, an alert condition is triggered. This is in addition to a floor and ceiling alert (my house never needs to be warmer than 35°C, or colder than 5°C).
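
    A rough sketch of the delta/floor/ceiling checks; the floor and ceiling mirror the values above, while the delta threshold is an assumed value:

        FLOOR_C = 5.0        # house should never be colder than this
        CEILING_C = 35.0     # ...or warmer than this
        MAX_DELTA_C = 2.0    # assumed per-poll jump that counts as suspicious

        def check_temp_alerts(current, previous):
            alerts = []
            if current < FLOOR_C:
                alerts.append('TOO COLD: %.1f C' % current)
            if current > CEILING_C:
                alerts.append('TOO HOT: %.1f C' % current)
            if previous is not None and abs(current - previous) > MAX_DELTA_C:
                alerts.append('DELTA ALARM: %.1f -> %.1f C' % (previous, current))
            return alerts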

    Now, the sensor itself is currently logging temperatures and pressures every minute. Eventually I'll drop that down to every 5 or 10 minutes. I can do that because the system is still polling the sensors continuously. If an alert condition is met during those 5 or 10 minutes, the system is immediately going to kick out an alert to the logging system, and that alert will run through a rule-based process to trigger notifications.

    Currently, when the system logs temperatures and pressures every minute, it is logging it both locally (in a file on the sensor node), AND over the network to a main logging node. Smart systems will be in place to truncate the locally saved data, so that it isn't saving days of temperatures, minute by minute.
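
    One simple way the local truncation could work, assuming loglines carry a 'SendTime' epoch value as in the earlier sketches (the retention period is arbitrary):

        # Keep only the last couple of days of locally stored records.
        import time

        def purge_old_records(records, max_age_days=2):
            cutoff = time.time() - max_age_days * 86400
            return [r for r in records if r.get('SendTime', 0) >= cutoff]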

    However, there are going to be times where it will be useful to know trends with more resolution than once every 15 minutes. In this case, the main logging node (or user) will be able to reach back to the local sensor node and request higher resolution trend data.

    It is hoped that eventually the systems will be smart enough to recognize specific event data (for example: The barometric profile of a door opening and closing, or what it "looks like" when I wake up in the morning, etc). In that case, the sensor node can also report an event to the logging node, with raw data verification/transmission upon request. Days down the road, it won't matter what the barometric profile looked like, as long as the event "Door opening and closing detected" is logged at the right time.

    So, three memory modalities: Very short term memory within the program loop itself, looking at no more than a couple of minutes. Short term memory maintained on the Pi itself, lasting no longer than a couple of days. Then long term archival storage on a main logging node, but with the lowest resolution.

  • Lessons learned

    staticdet5 • 02/26/2016 at 12:52 • 0 comments

    In operations, we usually do this afterwards (ok, sometimes we'll yell this out to each other if we're really in the shit), but I learned something today.

    OK, not really "learned", as I've been doing this for years at one of my jobs. It sunk in two days ago that I should maybe try it here.

    SAMBA!

    For those that don't know, Samba is a Server Message Block (SMB) protocol implementation for Linux/Raspbian. With SMB enabled, it also allows the Windows Internet Name Service (WINS) to be used. This allows the system to join a Windows workgroup.
    So what?
    This allows you to map a directory as a network drive within Windows (including Windows 10, as I just did).

    My issue was the lack of decent coding environments on my Raspberry Pi. I tried a couple, and they weren't doing it for me. I tend to use Notepad++ at one of my jobs, because that's what we use (I don't need to hear about what is better here. When you've got one other option, and your current situation sucks, go with the other option).
    I tried getting other editors, but in the end, just sitting down and mapping the drive onto my Windows machine has worked terrifically. I'm going to have to do this with most of my Pi builds.

    I used this guide (I probably didn't need it... maybe):

    Samba Share

    One important note for the new folks: sudo is your friend! When you're setting this up, some of the commands are prefaced with "sudo", some aren't. It turns out that setting your smbpasswd needs sudo, but isn't written that way in the guide (check the comments, towards the end). Real important. This won't work without it.

    Once you've got your samba share set-up, you can use any editor that you want, as long as it is available on your networked computer.

    Now, if I can just figure out why my threading commands aren't working....

  • Progress to date

    staticdet5 • 02/19/2016 at 22:54 • 0 comments

    The code up on GitHub works... Kinda. The multi-threading is definitely broken.

    Currently, the system is started up using two terminal windows (I'm running the Pi's headless, over TightVNC). The first window is opened, and test.py is run. All this does is turn on the logging routines. When data is received, you'll see the Python dictionary displayed in this window.
    The second window runs Node.py. This is the "meat" of the system. When the program is first run, it will ask for the logging IP, and then a series of questions based upon the sensors that the Node is recognizing. Once that is done, the Node (is supposed to) start a thread for each sensor. Each thread continuously polls the sensor, as quickly as possible, and kicks out sensor data when it meets criteria (too hot or too cold, for the BMP180 thermal sensor, and once a minute for all the other sensors). Right now, that's the whole thing.
    A ton of effort was put into the auto-configuration of the Node. The system currently runs an i2cdetect and pulls out the answering I2C addresses. If new addresses are detected, the user is prompted to identify the sensor, which is then saved in the configuration file. The same is going to be possible for DS18B20 thermal sensors because each of them carries a unique identifier.
    Data is reported as a Python dictionary and then "Pickled" (a method for serializing objects in Python). The data reporting may change in the future, but is currently preferred because values can be identified easily and there's no need to maintain sequencing in the data stream. This allows one section of the program stack (i.e. a sensor object) to create a report and pass it off to the next step (i.e. the Node), which appends some more data before sending it over the network, where the receiving Node can append still more data to the dictionary. Nodes can have sensors, and Nodes can also store data in memory. Ideally, each sensor runs in its own thread. When a sensor thread kicks out a data point (or more), the Node takes the data and appends the time, the name of the node, and the IP address. The Node Pickles the data dictionary before it sends it down the network to the memory Node (currently, one node is doing all of these functions). When the receiving Node gets the data, it unPickles it, appends a receive time (this seems like a good idea right now), and saves the data to disk.
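
    The auto-configuration pass mentioned above (running i2cdetect and pulling out the answering addresses) might look roughly like this sketch; the bus number and the parsing are assumptions, not the project's actual code:

        # Run `i2cdetect -y <bus>` and collect the addresses that answered.
        import subprocess

        def scan_i2c(bus=1):
            out = subprocess.check_output(['i2cdetect', '-y', str(bus)])
            addresses = []
            for row in out.decode('ascii', 'ignore').splitlines()[1:]:
                # Each row looks like "70: -- -- -- -- -- -- -- 77"
                _, _, cells = row.partition(':')
                for cell in cells.split():
                    if cell not in ('--', 'UU'):
                        addresses.append(int(cell, 16))
            return addresses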

    Currently, multi-threading isn't working. I can't get any thread but the BMP180 thermal sensor to run correctly. A fair amount of troubleshooting went in to this before the holidays, and I'm going to have to pick this up again. I had it running initially, but I completely screwed up my object instantiation, so my objects (sensor threads) were crosstalking and polluting each other's "short term memories".

    I need to develop a graceful check for the logging Node. Right now, if the Node starts up and attempts to send data to an incorrect IP, the system suffers a full-stop error. Eventually, I'd like the Node to have a way of automagically looking at the local network and finding the local memory node. Failing that, I would like the Node to save sensor data locally, until a memory node is identified. There's also the possibility of always saving data locally, and sending reports (when possible) to a main memory Node.
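
    A graceful fallback could be as simple as wrapping the network send and saving locally on failure. The host, port, and local-save helper here are hypothetical, just to sketch the idea:

        import socket

        def send_report(raw_bytes, host, port=5005, timeout=2.0):
            try:
                s = socket.create_connection((host, port), timeout=timeout)
                try:
                    s.sendall(raw_bytes)
                finally:
                    s.close()
                return True
            except (socket.error, socket.timeout):
                append_logline_raw(raw_bytes)   # hypothetical local-save fallback
                return False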

    I'm working on using GitHub. I'm very new at it, so there's going to be some starts and stops. I've got the basics down, when I consult a cheat sheet... I'll work on that. And code commenting. My coding is a mess.

  • Init

    staticdet5 • 02/19/2016 at 12:49 • 0 comments

    This is one of my longest running projects. The seed was planted in my head when I was a kid, watching all sorts of movies. Those guys could always walk over to their computer and call up a cool display of their house (castle, secret lair, whatever), and instantly get all kinds of cool information.

    I dipped my toes in the water when USB came out. That water was definitely TOO COLD. I'm not a programmer or electrical engineer (seriously, look at my code. Not a programmer). I didn't have the time or motivation to jump into that (I was working hard at being a medic).

    A couple more years later, and I got my hands on a USB development board. I can't even remember the name of it. But I jumped in with both feet on the almost $100 board... Only to find that they wanted more money for the development environment. I poked and played with it a bit. Got some servos to move (ooohhhhh shiny), and then something even better hit me in the face.

    I was looking online for more hacks to make better use of this board, and I came across something called an Arduino. It took some doing, but I finally got one in my hands. A gizmo-board designed with the non-programmer, non-electrical engineer in mind. SOLD! I spent a couple of years playing around with it (ohhhh... blinky lights), and even contemplated using it attached to a computer, to provide some level of a local sensor and actuator system.

    It took a couple more years (and the purchase of an Ethernet shield) before I tried making a sensor node. I poked at it for awhile, got sick for a month, and was basically stuck at home. While exhausted, feverish, and anorexic, I had some kind of wild hair up my ass about building this thing. I churned out a workable project that was sending feeds to Pachube. Sweet.

    Then I broke it. To this day, I have no idea what I did. I got better, went back to work, and tried to climb out of debt (medics don't make money). I went back to revise my code (because it was a mess), and broke it. I tried to revert it, and it wouldn't work. I never got it back up and running. Switched projects to take a break, and it was gone...

    To be honest, I'd been frustrated by the project on a couple of levels. The Arduino is a fantastic learning platform, and it's a great tool to have on the bench when you want to just build something quick and relatively easy. However, you can hit the limitations pretty quickly, in terms of power and language. Yeah, I could start learning C (See above, not a programmer), and I even did learn some C when I was working on the next project (Airsoft Smartgun Controller). But I'm not great with languages, and I really think in terms of Python (and I muddle through with Arduino-speak).

    When I discovered the Raspberry Pi, I was blown away. I bought two as soon as possible, one to gift to a hacker buddy with a swarm of small children. Either he'd play with it, or it would get given to the kids. Or both. Mission accomplished.

    The other one, I tinkered with. Holy crap, the things I learned.



Discussions

staticdet5 wrote 04/06/2016 at 14:24

I'm mucking about with switching the logging protocol over to UDP. The rationale behind this comes at a couple of levels. The networks that we're looking at are pretty simple home networks. The message traffic is not sensitive, and the eventual goal is to have the Nodes capable of confirming receipt (if needed), or even reaching out and asking a sensor Node for a status update. Finally, there are a couple of logging/response/announcement messages that may dramatically benefit from a broadcast-type packet.

If folks have thoughts on this, hit me up.
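
For what it's worth, the UDP send side could be as small as this sketch; the port and broadcast address are placeholders, and none of this is implemented yet:

    import socket

    def send_udp_logline(raw_bytes, host='192.168.1.255', port=5005, broadcast=True):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        if broadcast:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(raw_bytes, (host, port))
        s.close()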

