• Narcissus on Display

    Diego • 05/27/2018 at 14:58

    Carefully disassembled and wrapped, Narcissus made its way to Miami to be part of RAW Pop Up. At the core of Miami's Design District, this event inaugurated a new space for art, with over 50 artists coming together to exhibit their work.

    Narcissus @RAW Pop Up, Miami. 2018. / Photography by Javier Sanchez


    Narcissus (2018) debuted alongside an earlier artwork of mine that I rebuilt, the Murder Machine (2014). I documented my process here on Hackaday, so make sure to check it out.

    Murder Machine @RAW Pop Up, Miami. 2018. / Photography by Javier Sanchez

  • Final assembly

    Diego • 05/10/2018 at 00:36

    The full-size 3D print was a success. Very little was left to chance, so I wasn't surprised that everything worked out and fit right. I made the final print on a Stratasys uPrint SE Plus using ivory filament. I'm very happy with the result.

    The base for the sculpture is handmade from three pieces of CNC'd Baltic birch plywood. I glued them together and sanded them to give the base a solid look and feel.

    The wooden base received a coat of oil-based water seal, the kind meant for outdoor wooden structures that are in contact with water. It should protect the wood from water damage for the duration of the installation.

    Initially, I was going to fill the pool with Castin' Craft resin, but I returned to the original idea of using water. Resin would make a more permanent water effect that wouldn't require any maintenance, but all my initial tests were unsuccessful: the pours went on smooth, then got bumpy as they cured. I'll play with resin again after the show is over.

    If anyone has run into a similar problem, please let me know; I'm curious. My theory is that because the wood wasn't sealed before pouring, the resin picked up and amplified the imperfections of the surface.

    After many, many, many tests, the base and the 3D print are ready to be assembled at the gallery. 🔥

  • Initial 3D print tests

    Diego • 04/18/2018 at 15:25

    Working code was step one; now my focus is entirely on the physical part of the sculpture. I've been working to get the size of the sculpture just right so the screen feels perfectly proportional.

    I spent a few hours at SVA's Visual Futures Lab using their Structure Sensor and Skanect. That was much easier than what I'd been doing: very easy to set up, a lot less cumbersome, and a lot faster than using the Kinect and my laptop. So big shoutout to the people at the VFL.

    Below are the results of the scan after a little cleanup in Blender and Meshmixer.

    I'm now refining the final size and the mounting brackets for the screen. I used the "tube" function in Meshmixer to create a channel for the screen's cabling; it now looks like it's going straight into my heart.

    The print will be on a simple platform that will hold water/resin and reflect the light from the screen.

    The 3D printing tests are going well and I'm on schedule. 🔥🔥🔥

    I'm using a Formlabs Form 2. I'm cropping the print to test specific parts of the sculpture (and to save resin). 

    Oh yeah, I also printed a case for my Pi 3 B+. (Note: a slim case for a Pi 3 will not fit a Pi 3 B+, thanks to the new PoE header pins behind the USB ports. I tried drilling the case to add a hole... that didn't work so well.)

    Learnings from 3D printing test:

    1. Save resin by hollowing out the print! (DUH! Why didn't I know about this before... who knows. Here's a good tutorial from Maker's Muse.)
    2. Blender and Meshmixer are tough to start using, but they quickly open up and become good friends.
    3. I need a 3-button mouse... 🐁

    Next steps

    • Finalize size of body
    • Iterate screen mount
    • Final print
    • Define platform size
    • Start platform production
    • Finish and assemble

  • Working Beta Code + OLED

    Diego • 04/08/2018 at 23:04

    The beta version of the code is ready.

    Everything works as I hoped, and now I can parse Twitter for #Selfies.

    After a long Twitter hiatus, my face made a short but important appearance on my feed.


    Lessons from Beta

    • face_recognition is an amazing, easy-to-use library. It is resource-intensive, though, and to get near-real-time facial recognition you'll need to optimize your code to fit its needs.
      • Installation for this library is a pain in the butt, mostly because of dlib.
      • Reduce source file size. I've found that downscaling Twitter's high-res images to 256 px gives me the most speed without sacrificing accuracy; I wouldn't recommend going any smaller. See the first sketch after this list.
    • Adafruit's SSD1351 OLED screens work with the Raspberry Pi thanks to the Luma.OLED library. I'll be writing a short, no-BS tutorial on how to get this working in the next few days. It works with most of their screens, so it's a good resource. A minimal display sketch also follows below.
    • Setting up a headless Pi is super easy if you use PiBakery. I cannot recommend it enough, and I'll be writing a little tutorial about it. This should be the standard way to flash an SD card with Raspbian; honestly, I don't know why anyone wouldn't use it.
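
    To make the "reduce file size" tip concrete, here's a minimal sketch of the downscaling step, assuming a local selfie.jpg (the file name and exact size are placeholders):

    import face_recognition
    import numpy as np
    from PIL import Image

    image = Image.open("selfie.jpg")   # a high-res selfie pulled from Twitter
    image.thumbnail((256, 256))        # downscale in place, keeping aspect ratio

    # face_locations() takes an RGB numpy array and returns a
    # (top, right, bottom, left) box for every face it finds.
    faces = face_recognition.face_locations(np.array(image.convert("RGB")))
    print(faces)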
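
    And a bare-bones sketch of pushing a Pillow image to the SSD1351 with Luma.OLED. The SPI port and device numbers are the library defaults, so adjust them to match your wiring:

    from luma.core.interface.serial import spi
    from luma.oled.device import ssd1351
    from PIL import Image

    serial = spi(port=0, device=0)     # SPI0, CE0 (library defaults)
    device = ssd1351(serial)           # 128x128 colour OLED

    face = Image.open("face.png").convert(device.mode)
    device.display(face.resize((device.width, device.height)))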


    Next steps

    • 3D Scan of subject.
      • Variations of pose
    • 3D model for printing.
      • Including base + wiring channels
    • 3D Print + Finishing
    • Final assembly
      • Wiring & final testing

  • Alpha Code v0.1 - Pi Setup complete.

    Diego • 03/23/2018 at 14:56

    After a lot of trial and error, I finally have a working prototype of the Python code that will power the whole experience. It runs really well on OSX.

    The basic pseudo-code:

    Twitter Authentication (REST API)
    Get all tweets with "#selfie"
    For every tweet:
      Parse JSON for IMG file.
      IF image is present
        Find faces in image
        Crop random face
        Display face in OLED display
      ELSE
        break 

    It works! 🔥🔥🔥🔥
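
    For the curious, here's roughly how that loop translates into Python with Twython, Pillow, and face_recognition. The API keys are placeholders, and I'm reading the ELSE branch as "skip this tweet", so the sketch moves on instead of stopping:

    import random
    import urllib.request
    from io import BytesIO

    import face_recognition
    import numpy as np
    from PIL import Image
    from twython import Twython

    # Placeholder credentials: use your own Twitter app keys.
    twitter = Twython("APP_KEY", "APP_SECRET", "OAUTH_TOKEN", "OAUTH_TOKEN_SECRET")

    for tweet in twitter.search(q="#selfie", result_type="recent")["statuses"]:
        media = tweet.get("entities", {}).get("media")   # parse JSON for an image
        if not media:
            continue                                     # no image: skip this tweet
        raw = urllib.request.urlopen(media[0]["media_url_https"]).read()
        image = Image.open(BytesIO(raw)).convert("RGB")
        boxes = face_recognition.face_locations(np.array(image))
        if boxes:
            top, right, bottom, left = random.choice(boxes)   # crop a random face
            face = image.crop((left, top, right, bottom))
            face.save("face.png")   # stand-in for the OLED display step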

    All of this will be executed on a Raspberry Pi. Initially I was using a Pi Zero W, but the face-recognition step was taking too long, and since space isn't really an issue I could benefit from a little extra computing power, so I upgraded to a Pi 3 B+. Now I need to optimize and cut down the time it takes to process each image and find the face. Multi-threading might be the answer.

    The trickiest part so far has been the initial setup of all the required libraries. It took me about 9 tries (around 15 hours each) on the Pi Zero, only to realize it was too slow. The Pi 3 B+ was easier to set up, and I'll be writing a quick summary of everything I learned in the hopes that others will find it useful: a Raspberry Pi quick-setup guide for noobs like me.

    I will be traveling over the next week and plan to use plane time to write and downtime to code. Stay tuned for new updates.

    Hardware:

    • Raspberry Pi 3 B+
    • 1.5" OLED SSD1351

    The libraries I'm using thus far:

    • Twython (Docs)
    • Face Recognition (Git)
    • Luma OLED (Docs)
    • urllib (Python)
    • Pillow (Python)
    • Threading (Python)

    Next steps

    1. Optimize code to be multi-threaded. (A rough sketch of the split follows below.)
      1. Separate Twitter, file handling, OLED, and facial recognition into individual threads.
    2. Begin 3D scanning experiments to get the right 3D model.
    3. 3D printing tests, material explorations, and sizing.
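
    And here's one shape the threaded split could take: queues between a Twitter fetcher, a face-recognition worker, and the OLED loop. The three helper functions are hypothetical stand-ins for the code I already have:

    import queue
    import threading

    images = queue.Queue(maxsize=10)   # raw selfies waiting for recognition
    faces = queue.Queue(maxsize=10)    # cropped faces waiting for the OLED

    def fetcher():
        while True:
            images.put(download_next_selfie())      # hypothetical helper

    def recognizer():
        while True:
            face = find_random_face(images.get())   # hypothetical helper
            if face is not None:
                faces.put(face)

    def displayer():
        while True:
            show_on_oled(faces.get())               # hypothetical helper

    for worker in (fetcher, recognizer, displayer):
        threading.Thread(target=worker, daemon=True).start()

    threading.Event().wait()   # keep the main thread alive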