Articles on this Page
- 05/18/18--13:43: Urban Design Observations: Why is This Minivan Lifted Like That?
- 05/18/18--13:43: The Camera, Transformed by Machine Learning
- 05/18/18--20:13: NYCxDesign Recap Days 1-3
- 05/21/18--20:46: How Do You Capture and Move a Shark Using Plastic Boards?
- 05/21/18--20:46: Anti-Squirrel Bird Feeder Designs
- 05/21/18--20:46: BMW's Minimalist People Mover
- 05/22/18--21:03: DIY Gear-Driven Plywood Panel Lift
- 05/22/18--21:03: Yo! C77 Sketch: How to Choose the Right Perspective
- 05/22/18--21:03: The Resulting Products of MakerBot's NYCxDesign Challenge
When I hear "3D-printed bike from Silicon Valley" I start to roll my eyes, but this company is actually onto something.
Arevo is a start-up that has figured out how to simplify and scale up, rather inexpensively, the carbon fiber manufacturing process. Typically, carbon fiber is difficult and expensive to integrate into objects because the fibers must be impregnated with resin, laid into a mold and baked in an oven to bind everything together. Obviously the oven must be larger than the mold, which limits the size of the object.
Arevo has shifted this process around in a revolutionary way. They've taken an off-the-shelf, six-axis robotic arm and fitted it with a deposition head of their own design. This head can not only lay carbon fibers anywhere in 3D space, but also extrude a thermoplastic material at the same time. That material binds the fibers together as they're being laid, eliminating the baking step and the need for an oven.
The implications of this are enormous. Archimedes is claimed to have said something to the effect of "Using a lever I could move the earth, if only I had a place to stand," and Arevo's development is similar in that it's merely a matter of being able to position and move the robot in order to print pieces of any size, like an airplane wing or a hangar roof.
To showcase their technology, the company is producing a decidedly humble bicycle. According to Reuters,
The process involves almost no human labor, allowing Arevo to build bicycle frames for $300 in costs, even in pricey Silicon Valley.
"We're right in line with what it costs to build a bicycle frame in Asia," Miller said. "Because the labor costs are so much lower, we can re-shore the manufacturing of composites."
There's a short (unembeddable) video about their process that you can watch here.
The Ultra Music Festival Team is looking for a Senior Visual Artist with Cinema 4D knowledge. We want someone who is super creative and tapped into all aspects of pop culture - TV, music, movies, memes, Internet sloths, etc. This position also requires a good collaborator who works well in a team, and someone with a strong visual point of view who isn't afraid to use it. We welcome new ideas and fresh thinking - surprise us, pitch cool stuff, make things we didn't ask for. Ultra Music Festival is a fun and expressive place to work and we want someone who is both of those things too. We work hard but we play hard too - often at the same time - and we expect you to do the same. View the full design job here
Yesterday Elon Musk's Boring Company held a publicly-streamed informational session where they revealed details of their plans to create a traffic-beating tunnel network beneath Los Angeles. Whereas the plan had previously envisioned automated sleds that would whisk passenger cars through the network, it has now evolved into the idea of building mass-transit, 16-person pods for which tickets would be sold at $1 a pop. Whether or not that comes to fruition will be based on both study and feedback from volunteers willing to take free rides on the test track they're currently working on.
Something I found super cool is that the Boring Company will use their digging procedure to create saleable construction materials as they dig. After learning that 15-20% of the cost of digging a tunnel is paying for the displaced dirt to be hauled away, Musk and co. have supposedly developed a way to compress the dirt on-site at high pressure, which, when combined with "a small amount of concrete," will yield cinder-block-sized bricks with a compression strength of 5,000 PSI (i.e. "Rated for California seismic loads," in Musk's words).
"Even if you give away the brick," explained Steve Davis, Boring Company Director, "you've just cut the cost of tunneling by 15-20%."
The presentation is a good hour long, but if your boss leaves the office early on Fridays and you'd like to scan through it, here it is:
Walking the dogs and this minivan caught my eye.
It doesn't make sense. I've seen vehicles with lifted suspensions before, which is typically done to add clearance for rough terrain. But the ground effects on this minivan are mere inches off of the ground.
Well, the graphics on the car (and the handicapped symbol on the license plate and rear right passenger door) should give it away.
I looked it up and VMI, or Vantage Mobility International, is an Arizona-based company that retrofits cars to make them handicap-friendly. I think that's a pretty awesome space to work in, and we've checked out the sector before; peep the links below.
Scroll through the hundreds of icons for "camera" on Noun Project or the 124,706 community-generated drawings of cameras on Google Quick Draw, and you'll notice they're all remarkably similar. Together, they suggest a shared cultural understanding of a camera: a classic point-and-shoot.
But the cameras we encounter every day bear little resemblance—in form or function—to this vestigial object. New capabilities in software, new hardware formats and imaging technologies, and emerging user behaviors around image creation are radically reshaping the object we know of as the "camera" into new categories. Perhaps the most impactful influence on the camera is being brought about by computer vision: empowering cameras to not only capture various kinds of images but to also parse visual information—effectively, to understand the world.
Software trained on vast datasets of labeled images can recognize things like vehicles, dogs, cats, and people, along with facial features, emotions, and second-order information like movement vectors and gaze direction from raw images and videos. Timo Arnall explored this emerging capability of machines to interpret images in Robot Readable World, a collection of computer vision videos from 2012. In the years since, machine learning has advanced by leaps and bounds in both accuracy and speed—see, for instance, the more recent open-source YOLO Real-Time Object Detection technique for comparison—with the potential to transform how we interface with cameras and computers alike. In media analyst and venture capitalist Benedict Evans' recent Ten Year Futures presentation, he discussed the potential impact of machine learning on cameras in the near future: "You turn the image sensor into a universal input for a computer. It's not just that computers see like people, it's that computers can see like computers, and that makes all sorts of things possible."
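To make this less abstract: a key post-processing step in single-shot detectors like YOLO is non-maximum suppression, which collapses the many overlapping candidate boxes a network proposes into one detection per object. Here's a toy sketch of that step; the box coordinates and scores are hypothetical, not drawn from any real detector.

```python
# Toy sketch of non-maximum suppression (NMS), the post-processing step
# detectors like YOLO use to collapse overlapping candidate boxes into
# one detection per object. All box data here is hypothetical.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_threshold=0.5):
    """Keep the highest-scoring box, drop boxes that overlap it, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep

# Three candidate boxes clustered on one object, one box elsewhere:
boxes  = [(10, 10, 50, 50), (12, 12, 52, 52), (11, 9, 49, 51), (200, 200, 240, 240)]
scores = [0.9, 0.75, 0.6, 0.8]
print(nms(boxes, scores))  # → [0, 3]: two detections survive
```

The network's job is assigning those scores; this little routine is just the bookkeeping that turns raw proposals into the clean "one box per dog" output we recognize from detection demos.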
Cameras enabled with machine learning therefore have the potential to both automate existing functions of the camera as a tool for human use and extend its creative possibilities far beyond image capture. One notable example is the recently launched Google Clips camera. Clips is a small camera with a special superpower: it understands enough about what it sees that it can take pictures for you. You set it on a shelf or a table or clip it to something, and on-board machine learning allows it to continuously observe, learn familiar faces, detect smiles and movement, and snap a well-composed candid picture, all while allowing you to be present in the scene instead of behind the viewfinder. It also does all of this without connecting to the internet.
As computing hardware has been miniaturized and made more affordable and machine learning algorithms more efficient and accurate, we'll likely see more cameras—and objects of all kinds—imbued with intelligence. According to Eva Snee, UX Lead on the Clips project, there are a lot of technological and user benefits to this approach. By learning on-device instead of communicating to servers in the cloud, the device can maintain the privacy of its user (all clips are stored locally on-device unless you choose to share or save them to your photos library on your phone) and operate much more efficiently in terms of both battery power and speed. "No one gets this data except you," says Snee. "That was very deliberate: you don't need a Google account, you don't need Google Photos."
Clips suggests a future of cameras as photographers, where decisions about the moment of capture are further shifted to the device. In the early design phases of the project, Snee says that the team asked themselves, "we're building an automatic capture camera, why does it need a button?...This is an amazing breakthrough—let's just make a camera that does it all for you." The Clips team stopped short of removing the button entirely, however. Snee explains that in addition to helping to train a camera to appreciate the inherently subjective, personal nature of photography, the button remained functionally significant:
"Every other camera that a human has interacted with in their life has a button. So it felt extremely foreign, it didn't make sense to people, and it actually made it harder for them to really understand how to use this thing and to understand even what capture means. That was a core design goal that we changed our position on—we need to give people agency and control just like they would have in a traditional camera."
A camera like Clips that can choose an appropriate moment to capture is really only the tip of the iceberg when considering the larger implications of machine learning. As the capacities of computer vision systems continue to evolve rapidly, what else might a camera that understands what it sees be capable of? How might these capabilities shift our relationship with our devices?
Pinterest Lens points to a potential future for the camera as a kind of sampling device—perceiving phenomena it can interpret from its environment and reporting back to the user. Every time you pin an image to a board on the Pinterest platform, you are creating a set of associations between it and other images on the board, which helps Pinterest's machine learning systems to categorize images. Lens leverages these insights to give its smartphone app a semantic understanding of on-camera objects, and uses it for "visual discovery"—essentially querying the world for information relevant to your interests.
The camera in this context is a kind of interpretation device for a user's lived experience, extracting salient information from what it sees and reporting back with useful information rather than serving as a tool for composing and capturing a moment in time.
Beyond interpreting phenomena at capture, machine learning—and especially techniques like Generative Adversarial Networks (GANs)—extends the camera's expressive potential into profoundly unnerving territory. These algorithms have the remarkable ability to synthesize realistic images from the emergent patterns in a database of images. Since their characteristics are drawn from real conditions, they produce a kind of uncanny fantasy of reality: they capture alternative conditions in alternative presents. And as a result, they suggest the potential of a camera without a camera, the full dissolution of the camera's physical form into software.
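For readers curious about the mechanics: the adversarial setup pits a generator G against a discriminator D in a minimax game, in the standard formulation introduced by Goodfellow and colleagues:

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)] +
  \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]
```

The generator learns to fool the discriminator, and at equilibrium its outputs are drawn from a distribution the discriminator can no longer tell apart from the training images—which is exactly what makes the synthesized faces and zebras so uncanny.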
Take for instance the paper Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks, which showcases a technique to translate the characteristics of one set of photos into those of another—for instance, transforming horses into zebras or oranges into apples. Or GPU maker NVIDIA's research paper "Progressive Growing of GANs for Improved Quality, Stability, and Variation" and the uncanny images it produces while exploring the latent space of a database of celebrity faces.
These algorithms use real images as a basis for manufacturing reality. In this sense, they are reminiscent of the 5th Dimensional Camera, a terrific early project by the speculative design studio Superflux. This camera was essentially a prop, designed to suggest the possibilities of the many worlds theory in the emerging science of quantum physics. It's a fictional camera that captures parallel worlds, the parallel possibilities between two moments in time.
The images that GANs produce have a similar quality: extrapolating from conditions in the world to explore plausible alternate realities. None of these images are "real" per se, but as their features are drawn from the world, they are both somehow made of the real and in composite, made unreal. As a result of this inherent ambivalence, reality gets a bit wobbly. Similar systems for extrapolating on image sets are empowering a new arsenal of Fake News and manipulated-to-an-unhealthy-extreme advertisements, transforming political attitudes and images of self in the process.
Consider the already existing pressures on image manipulation in advertisements and the way we present ourselves on social networks as described in Jiayang Fan's piece in the New Yorker, China's Selfie Obsession:
"I asked a number of Chinese friends how long it takes them to edit a photo before posting it on social media. The answer for most of them was about forty minutes per face; a selfie taken with a friend would take well over an hour. The work requires several apps, each of which has particular strengths. No one I asked would consider posting or sending a photo that hadn't been improved."
Pressures to automatically enhance images are likely to continue. Imagine an internet ad adjusting its image content on demand, to match a model trained on an individual viewer's interests. Or an Instagram filter guaranteed to increase follow count by curating your feed and manipulating your images imperceptibly towards a more desirable ideal. Perhaps in such a world of truth-bending we'll take on-camera image manipulation for granted as long as it furthers our interests. But where might this lead us?
In Vernacular Video, a keynote talk at the 2010 Vimeo Awards, sci-fi author and futurist Bruce Sterling took the notion of a camera as reality-sampling-device one step further to explore the future possibilities of what he called a "magic transformation camera", capable of total understanding of a given scene.
"In order to take a picture, I simply tell the system to calculate what that picture would have looked like from that angle at that moment, you see? I just send it as a computational problem out into the cloud wirelessly. [...] In other words there's sort of no camera there, there's just the cloud and computation."
Sterling distills photography into its core action: the selection of a specific vantage point at a specific moment in time. Yet, in the future, this "decisive moment" is instead reconstructed by querying a database with comprehensive knowledge. He later describes imaging and computational power embedded in paint flakes in the walls, a kind of sensory layer on everything, capable of observing everything.
This future concept inspired the early development of DepthKit, a toolkit built by Scatter in the context of efforts by the creative coding community to explore the expressive potential of the Kinect and similar structured-light scanners capable of bringing three dimensionality to recorded images. The technique, called volumetric video (on which you can read more by Scatter co-founder James George here and here), allows a real scene to be captured with depth information, or even in-the-round from various vantage points, with perspectives on the scene lit and composed after the fact.
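The geometric core of this kind of depth-based capture is back-projection: mapping each pixel of a depth image to a 3D point using the camera's intrinsics. A minimal sketch of that pinhole-camera math follows; the focal lengths, principal point, and depth values here are hypothetical, not taken from any particular sensor.

```python
# Sketch of the pinhole-camera back-projection at the heart of
# volumetric video: a pixel (u, v) with measured depth d becomes a
# 3D point in camera coordinates. Intrinsics here are hypothetical.

def backproject(u, v, depth, fx, fy, cx, cy):
    """Map a pixel and its depth to a 3D point (x, y, z)."""
    x = (u - cx) * depth / fx   # horizontal offset scaled by depth
    y = (v - cy) * depth / fy   # vertical offset scaled by depth
    return (x, y, depth)

# A pixel at the image center stays on the optical axis:
print(backproject(320, 240, 2.0, fx=500, fy=500, cx=320, cy=240))  # → (0.0, 0.0, 2.0)
# A pixel to the right of center maps to a point with positive x:
print(backproject(420, 240, 2.0, fx=500, fy=500, cx=320, cy=240))  # → (0.4, 0.0, 2.0)
```

Run over every pixel of a depth frame, this produces the point cloud that tools in this space re-light and re-frame after the fact.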
So, what to make of all this? If we consider the implications of products like Google Clips and Pinterest Lens, algorithmic approaches like GANs, and Sterling's magic transformation camera as indicators for the future of the camera, it suggests a camera as far more than a point-and-shoot.
Google Clips suggests a near future of cameras as things endowed with agency, capable of observing, composing, and selecting moments to capture for us. And furthermore, it suggests future cameras as learning platforms, evolving over time in response to human use. Pinterest Lens suggests cameras as reality querying devices, interpreting our surroundings for information of value to us. GANs extend this possibility into generative territory, a near future of cameras as reality-extrapolation or -distortion devices, building on learned models to produce convincing synthetic images. Bruce Sterling's speculative future camera and the volumetric filmmaking techniques it inspired suggest a near future camera whose act of taking a photograph is one of searching through recorded moments from a total history of lived experience.
All of these cases suggest a camera with a very different kind of relationship to its operator: a camera with its own basic intelligence, agency, and access to information. Beyond a formal evolution away from the artifact of the "camera", these novel capabilities should complicate our expectations of what a camera is capable of. And increasingly, we may need to acknowledge a certain speciation has happened: these strange new cameras deserve categories of their own in order to contend with the competing visions of reality they suggest.
We're three days into NYCxDesign, and our calendars couldn't be more packed! Although we haven't hit ICFF yet, we've discovered plenty of smaller wonders around the city. There's so much going on that if you haven't visited our NYCxDesign Map, we suggest you do so before you get too overwhelmed. Below is a list of our favorite shows so far:
If you're looking to visit an exhibit that explores the past, present, and future (or even parallel universes?) of design, WantedDesign Brooklyn would be a good bet. With exhibitions like Oui Design, which features several prominent French product designers, you get a look into the trends of the design world today. Student presentations such as SVA's "Radical Times" exhibit that explores speculative pasts and futures or Carolien Niebling's "Sausage of the Future", however, present something more surreal that will force you to ponder the numerous ways in which designers play a role in shaping our culture and future.
Furnishing Utopia 3.0
Who said chores had to be boring and mundane? The third edition of Furnishing Utopia asked 26 international designers to explore and reinterpret the focused work and cleanliness of the Shakers, which they regarded as a path to enlightenment. Holding things such as spray bottles, watering cans and handles in high regard, this exhibition will make you crave cleaning your home, making the "Sensory Isolation Booth" at the back of the exhibition extremely fitting. In the booth, visitors are asked to test out the various brooms in the exhibition by sweeping up different materials.
BALANCED/UNBALANCED at Colony
The pieces on display at Colony in SoHo, including these "Bumpy Growers" by Poritz & Studio, play around with the theme of Balanced / Unbalanced. The show is a relaxing escape from the busy streets of SoHo (especially Canal Street), and will remain open all the way until the 24th.
Ladies & Gentleman Studio for MUJI
Ladies & Gentleman Studio's installation for MUJI isolates the beauty of the materials used in iconic products from the Japanese retailer. The unassuming materials heaven takes MUJI products and displays them atop their raw materials. For instance, ceramics are placed on raw slabs of clay and wooden bookcases are rested on wood shavings. If you need a zen moment, stop by this installation, touch the raw materials and instantly feel revived.
Sight Unseen Offsite
Sight Unseen Offsite's main location may have downsized, but the quality of works shown at the crowd favorite show remains strong. This year's show is heavily focused on unexpected collaboration pairings, including the mini show-within-a-show Field Studies, which paired celebrities with designers to create surprising results. Think a mirror designed by Bower and Seth Rogen and a piano designed by Wall for Apricots and Jason Schwartzman.
The Future Perfect
The Future Perfect is putting on quite the show at 55 Great Jones Street. The dark, dimly but beautifully lit space highlights tropical designs from Chris Wolston among a few other furniture and lighting designs with lavish materials.
At Patrick Parrish Gallery on 50 Lispenard in Soho, a series of sculptures by artist Carl Emil Jacobsen exhibit an earnest passion for material exploration. Jacobsen's work involves gathering found materials such as tiles, stones, volcanic ash, and chalk to make his own bespoke pigments, and the pieces serve to highlight the beauty of color that derives purely from nature.
American Design Club Presents: Built at Canal Street Market
American Design Club has set up shop toward the back of Canal Street Market, featuring an exhibition packed with design inspiration. From fuzzy chairs to quick 3D printed planters, you'll keep discovering more and more gems in this small corner of Design Week.
Curious to see more from NYCxDesign? Follow our stories & posts this week on Instagram and plan your visit with our NYCxDesign Map!
On June 30th the New York Aquarium will open "Ocean Wonders: Sharks!" This is a 57,500-square-foot, $146 million exhibit that has taken over a decade to create, as the newly-built aquarium was badly damaged by Hurricane Sandy.
Now coming into the home stretch, the aquarium employees need to get the sharks from their old tank into the new tank, which is about four blocks away. How the heck do you move a shark? The procedure, which involves plastic boards, is human-resource-intensive and seems pretty primitive:
My main design gripe is with those plastic boards, which the Times article refers to as "breaker boards:"
I can't imagine their original commercial purpose, but Jeez Louise, with a $146 million budget I'd have liked to see something with proper handles on the non-shark side rather than finger cut-outs.
"You just have to be quick," marine biologist and shark supervisor Hans Walters told the Times. "If they start thrashing you pull your hands away."
I'll never complain about transporting a dog again.
Designing an object that will accommodate one animal, but not another animal of similar size, is a tricky endeavor. A bird feeder is a good example: How do you allow your avian friends to access the seed, without greedy neighboring squirrels helping themselves?
One solution might be to cover the seed-dispensing apertures with a wire mesh. But here we can see that another design feature of this bird feeder--a wide aperture that allows humans to easily load it--can be exploited:
Another idea is to encircle the feeder in a cage. But this, too, can be defeated:
This squirrel has even managed to squeeze its entire body into the cage:
A more clever solution might be to exploit physics and gravity. If the feeder reacts differently to the weight of a squirrel than of a bird, it could be made to spin or rotate in such a way as to fling the squirrel off.
Squirrels have managed to counter that.
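The weight-trigger logic behind such feeders is simple enough to sketch. In this toy example the threshold is hypothetical, chosen only to sit between typical songbird and squirrel weights:

```python
# Toy sketch of a weight-activated feeder: light birds can perch and
# feed, while a heavier squirrel trips the spinning mechanism.
# The threshold value is hypothetical, picked to sit between typical
# songbird weights (tens of grams) and squirrel weights (hundreds).
SPIN_THRESHOLD_G = 200  # grams

def feeder_reacts(visitor_weight_g):
    """Return the feeder's response to a perched visitor."""
    return "spin" if visitor_weight_g > SPIN_THRESHOLD_G else "feed"

print(feeder_reacts(30))   # songbird-sized visitor → "feed"
print(feeder_reacts(500))  # gray-squirrel-sized visitor → "spin"
```

The engineering challenge, as the examples above and below show, is that the mechanism is only half the battle; the squirrel adapts to whatever threshold you pick.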
How about an umbrella-shaped baffle over a suspended feeder? That would prevent the squirrel from climbing around it, no?
So, what does work?
And placing a spinning feeder the proper distance away from anything a squirrel could use to stabilize itself:
This is sort of like the corporate version of looking inside someone's workshop to see what kind of nifty little jigs they've built for themselves to simplify production. Upon learning that some of their factory and logistics employees walk up to 12 kilometers a day inside their facilities, BMW's Group Research and Technology House turned their engineering might towards creating a personal transportation device. The Personal Mover Concept that they came up with is pure form-follows-function.
"It had to be flexible, easy to maneuver, zippy, electric, extremely agile and tilt-proof – and, at the same time, suitable for carrying objects," explains Richard Kamissek, head of BMW's Operations Central Aftersales Logistics Network.
The body platform of the Personal Mover Concept is 60 centimeters wide and 80 centimeters long, so that a person can stand comfortably on it and still have room for larger, heavy objects. Two wheels at the rear corners of the platform and two support wheels at the front ensure that it does not tip over, even in tight bends. The two front support wheels rotate 360°, which greatly increases maneuverability. The handlebar and drive wheel are sunk into the middle of the body platform at the front.
The handlebar contains the entire electrical system, the battery and the drive wheel, and can be rotated 90° to the left and right, allowing the Personal Mover Concept to turn on the spot. A thumb throttle for regulating speed is integrated into the right grip. This control is used to start the Personal Mover Concept, switch the light on and off, select the driving mode or check battery status. For safety, there is also a bell for warning other employees. The left grip operates the brake and a dead man's control.
The PMC uses regenerative braking and can hit 25 KPH (15.5 MPH). It can be plugged into a regular outlet for recharging, and the battery's good for 20 to 30 kilometers.
While the PMC is real--if BMW Blog is to be believed--apparently it has yet to be batch-produced and distributed. Says Kamissek, "We hope to start using it as soon as possible!"
"90°" is a minimalist bike stand designed to display your bike in a unique way.
Here's a great piece of practical prototyping. Frank Howarth is finishing his ceiling with plywood panels, meaning a lift would be handy. Why buy when you can DIY?
We know Howarth as an architect and designer, and here he pulls out the mechanical engineering muscles too.
Here he walks us through the design and execution of this project:
I now want to build one, even though I have zero use for it. There's a name for this disease, yeah?
Raymond Loewy, the father of industrial design, once drew up this nifty chart showing how form factors evolved into the early 20th century:
We know it's hard to see, so let's blow it up a bit:
What I wouldn't give to see him still alive and completing the chart up to the modern day.
Are any of you game to try? And/or do you have ideas for different objects you'd show? If you draw something up, post it in the comments and we'll make you famous.
Beyond having accurate perspective, choosing the right perspective angle can make or break a sketch. In this video I'll show you how you can use perspective to give a convincing sense of scale to the object you are designing.
As always, if you have any questions or comments on the techniques shown, leave them in the comments below. What other techniques would you like to see?
The UKB chair brings traditional Japanese craftsmanship to a relaxed and refined contemporary lifestyle. Its minimalist silhouette and precise alignments are made possible only through the intricacy of hand-cut joinery. The broad radius and clean lines pay homage to Streamline architecture and the sweeping horizontal landscapes of LA, the home of Base 10 Furniture.
To celebrate this year's NYCxDesign, MakerBot hand-picked 17 New York City designers and put them to the test of designing and prototyping an object to improve daily life. The 13 individual designers and 2 design duos were each given a MakerBot Replicator and a few rolls of filament to bring their objects to life in about five weeks. Needless to say, the broad brief yielded extremely diverse results.
The resulting products were put on display at the MakerBot headquarters in downtown Brooklyn. The exhibit and party drew a large crowd, including a panel of esteemed judges (ed. note: Core77 editors were part of the panel) who reviewed the projects and voted on their favorites.
We particularly enjoyed this challenge because in a sea of shiny, completed furniture and home object exhibitions during design week, this showcase instead put emphasis on makers and their varying design processes. Below are images of every product that came out of the competition, accompanied by descriptions written by the designers themselves. Projects that received an award are noted before their descriptions.