
Whaziznamez

@whaziznamez / whaziznamez.tumblr.com

Searching for Truth, within and without... (:
reblogged

9to5.tv

An art exhibition has announced an open call for submissions for projects that incorporate livestreaming:

9to5 is a month-long digital art exhibition in Atlanta that dissolves the boundary between artist and audience by way of an experimental livestream and other emerging interfaces.
Unlike other art exhibitions, the ideal 9to5 experience is online, where patrons can interact with the projects and performances broadcast, and influence the final artworks using our custom-built suite of inputs.
If you have a potential project that utilizes livestreaming and/or digital inputs to blur the line between ‘viewer’ and ‘collaborator’, we want to hear from you. From data visualization to interpretative dance, podcasts to prose, AI to the humanities, all artists and technologists with an interest in experimentation are welcome.
Talent will have access to a ton of new technology courtesy of Georgia State University.
For talent that needs it, travel to Atlanta will be covered by 9to5, along with lodging and food.
All submissions will be considered, but there are limited slots available in each category.
APPLY BEFORE 17 JULY 17
Source: 9to5.tv
reblogged

Natural Human-Drone Interaction

A research project from Eirini Malliaraki illustrates ideas for drone programming, from gesture to emotion recognition:

1-month graduate project // Royal College of Art & Imperial College // May 2017
Taking inspiration from the interaction between falconers and their birds of prey, as well as from common daily gestures, cybernetics, dance, and robotics, several themes were explored, namely:
- a gesture-based interaction scheme that attempts to create a more intuitive and natural way to communicate with aerial robots
- ways in which aerial robots can become more autonomous by interpreting their environment in richer ways
- ways in which they can communicate their intentions and give feedback
- ways in which an aerial robot can understand and react to human emotions and eventually influence our behaviour
Parrot AR Drone, Node.js, JavaScript, Affectiva emotion analysis SDK
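
The project's code isn't included in the post, but given the stack it lists, a minimal sketch of the gesture-to-command idea might look like the following. It uses the open-source ar-drone Node.js client for the Parrot AR Drone; the gesture and emotion labels, and the mapping itself, are hypothetical stand-ins for the project's actual vision pipeline.

const arDrone = require('ar-drone');   // open-source Parrot AR Drone client (npm package: ar-drone)
const client = arDrone.createClient();

// Hypothetical recognizer output; in the project this would come from the
// camera feed via gesture recognition and the Affectiva emotion SDK,
// e.g. { gesture: 'raise-arm', emotion: 'joy' }.
function actOnInput(input) {
  switch (input.gesture) {
    case 'raise-arm':            // falconry-style "come up" cue
      client.up(0.3);
      break;
    case 'sweep-left':           // circle counter-clockwise
      client.counterClockwise(0.5);
      break;
    case 'palm-down':            // settle and land
      client.stop();
      client.land();
      return;
    default:
      client.stop();             // hover when nothing is recognized
  }

  // Emotion feedback: back off slightly if the person appears angry.
  if (input.emotion === 'anger') {
    client.back(0.2);
  }
}

client.takeoff();
// Simulated events; the real project would fire these from the vision pipeline.
setTimeout(() => actOnInput({ gesture: 'raise-arm', emotion: 'joy' }), 3000);
setTimeout(() => actOnInput({ gesture: 'palm-down', emotion: 'neutral' }), 8000);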
Source: vimeo.com
reblogged

Aston Martin DBR1, 1956. One of only 5 ever made, it is to be auctioned by RM Sotheby’s at the Pebble Beach Concours d'Elegance. This is the first time that a DBR1 has been offered for public auction, with Sotheby’s expecting it to top $20 million. That could make it the most valuable Aston Martin ever sold at auction and the most valuable British car of any marque.

Photographs by Tim Scott 

sololoquy
Creating a life that reflects your values and satisfies your soul is a rare achievement. In a culture that relentlessly promotes avarice and excess as the good life, a person happy doing his own work is usually considered an eccentric, if not a subversive. Ambition is only understood if it’s to rise to the top of some imaginary ladder of success. Someone who takes an undemanding job because it affords him the time to pursue other interests and activities is considered a flake. A person who abandons a career in order to stay home and raise children is considered not to be living up to his potential — as if a job title and salary are the sole measure of human worth.

You’ll be told in a hundred ways, some subtle and some not, to keep climbing, and never be satisfied with where you are, who you are, and what you’re doing. There are a million ways to sell yourself out, and I guarantee you’ll hear about them.

To invent your own life’s meaning is not easy, but it’s still allowed, and I think you’ll be happier for the trouble.

Bill Watterson  (via h-o-r-n-g-r-y)

reblogged

Computational Video Editing for Dialogue-Driven Scenes

A research paper from Stanford University and Adobe Research provides a proof-of-concept system to simplify and automate video editing:

We present a system for efficiently editing video of dialogue-driven scenes. The input to our system is a standard film script and multiple video takes, each capturing a different camera framing or performance of the complete scene. Our system then automatically selects the most appropriate clip from one of the input takes, for each line of dialogue, based on a user-specified set of film-editing idioms. Our system starts by segmenting the input script into lines of dialogue and then splitting each input take into a sequence of clips time-aligned with each line. Next it labels the script and the clips with high-level structural information (e.g., emotional sentiment of dialogue, camera framing of clip, etc.). After this pre-process, our interface offers a set of basic idioms that users can combine in a variety of ways to build custom editing styles. 
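
As a rough illustration of the idiom-driven selection step the abstract describes (not the authors' implementation, which combines idioms and optimizes over the whole scene), a toy sketch might score each candidate clip for a dialogue line with a cost function derived from an idiom and keep the lowest-cost clip per line. The clip labels, names, and weights below are invented for the example.

// Toy illustration of idiom-driven clip selection (not the paper's actual system).
// Each dialogue line has candidate clips labeled with framing and speaker visibility;
// an "idiom" is expressed here as a simple cost function over those labels.

// Invented example data: two lines of dialogue, each with clips from two takes.
const lines = [
  {
    text: 'Where were you last night?',
    sentiment: 'negative',
    clips: [
      { take: 1, framing: 'wide',     speakerVisible: true },
      { take: 2, framing: 'close-up', speakerVisible: true },
    ],
  },
  {
    text: 'I was at the office.',
    sentiment: 'neutral',
    clips: [
      { take: 1, framing: 'medium',   speakerVisible: true  },
      { take: 2, framing: 'close-up', speakerVisible: false },
    ],
  },
];

// Example idiom: prefer close-ups on emotionally charged lines,
// and always prefer clips where the speaker is on screen.
function idiomCost(line, clip) {
  let cost = 0;
  if (!clip.speakerVisible) cost += 10;
  if (line.sentiment !== 'neutral' && clip.framing !== 'close-up') cost += 3;
  return cost;
}

// Greedy selection: independently pick the lowest-cost clip per line.
// (The real system also accounts for transitions between consecutive clips.)
const edit = lines.map(line =>
  line.clips.reduce((best, clip) =>
    idiomCost(line, clip) < idiomCost(line, best) ? clip : best
  )
);

edit.forEach((clip, i) =>
  console.log(`Line ${i + 1}: take ${clip.take} (${clip.framing})`)
);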