Articles on this Page
- 05/16/18--13:12: Steven M. Johnson's Bizarre Invention #67: The BikeVest
- 05/16/18--13:12: Tools & Craft #95: Making Stuff, and Other Human Impulses
- 05/16/18--13:12: Urban Design Observations: Abandoned Storefront Graffiti Etiquette
- 05/17/18--13:28: Yanko Design joins Coroflot Design Network
- 05/18/18--13:43: Urban Design Observations: Why is This Minivan Lifted Like That?
- 05/18/18--13:43: The Camera, Transformed by Machine Learning
- 05/18/18--20:13: NYCxDesign Recap Days 1-3
- 05/21/18--20:46: How Do You Capture and Move a Shark Using Plastic Boards?
- 05/21/18--20:46: Anti-Squirrel Bird Feeder Designs
- 05/21/18--20:46: BMW's Minimalist People Mover
The late George R. Kravis, the philanthropist, art collector and founder of the Kravis Design Center, owned what is probably the world's finest collection of industrial design. And he wasn't one for mere eye candy: "As a collector, George is interested in an object's function, form, manufacture, and materials while also considering the user and the design process," as his website states.
A new book from Rizzoli, "Industrial Design in the Modern Age," combs through Kravis' collection to highlight a couple hundred of its more significant objects. UK-based design historian Penny Sparke, who wrote the introduction, states that "The originality of this book lies in the fact that the objects are organized according to function rather than by designer…That allows a full range of objects to be included—from 'designer' to anonymous—and that is the uniqueness of the Kravis collection also."
Rizzoli describes the book thusly:
An ambitious new survey of industrial design from 1900 to the present day in the United States, Europe, and around the world, as told through selected objects from the George R. Kravis II Collection.
Destined to become a new classic in the design genre, this major work summarizes an enormous topic—the creation of everyday objects for mass production and consumption from 1900 to the present—and shows how these products have become both symbols of the modern age and harbingers of our future. It covers the work of the heroes of modern and post-modern design, from the early pioneers—Dreyfuss, Bel Geddes, and Eames—to the leaders in the field today, including Starck, Newson, and Ive.
More than 200 objects from the Kravis Design Center's collection are highlighted as important exemplars of industrial design. A wide range of media is represented, including furniture, metalwork, ceramics, and plastics. New research by contributing scholars has uncovered illuminating details about each object that help tell a more complete story of design in the past 100 years.
Amazingly, the company has placed a freely accessible 75-page preview of the book online. The real deal will set you back $85.
Core Home is the fastest-growing company in the housewares industry. More important is how we got to where we are. In short, we're just a bunch of creative-type, product-loving people with too much passion and not enough time. If this excites you, please send your resume ASAP!
View the full design job here
Last week a gentleman who runs a local maker space invited me to teach some hand tool classes at the space. I was happy to have the discussion but we got hung up by a central question: How do you get students to the point at which they can produce something?
My own answer thus far as a teacher has been to teach classes in which the product is the skill itself. I teach classes in making dovetails, sharpening, installing hinges with hand tools, and so on.
I admire those who are developing schools that teach classes built around a PRODUCT - and we're offering an exciting one in June on building a collapsible shave horse, so I guess TFWW is also in this group - but these classes often highlight the tension between several contrasting human impulses.
As woodworkers, we feel making things, especially with our hands, is deeply satisfying. People also love learning new skills, and most people also enjoy the social aspects of learning in a group.
But we also have conflicting desires. The desire not to be the laggard, in danger of being left behind the group. The desire for instant or near-instant gratification. I want it now! And - crucially - our identities as consumers.
Nowadays shop class has been consigned to the dustbin of history for most people. Many students come to woodworking classes thirsting for the satisfaction of creation. Andrew Zoellner, the new editor of Popular Woodworking, wrote an inspiring call to arms, The Joy of Woodworking - Out on a Limb as his inaugural editorial. "We're here to inspire people to make more of the stuff they have in their lives and to learn the virtues of craft," he writes.
For those of us who make a livelihood from making stuff with our hands, or from teaching others to make stuff with theirs, getting paid is also a challenge.
Hand tools teach us to be responsive to subtleties and ignore the pace of contemporary society. Tuning out competing fundamental needs is a much harder act -- one I am still learning.
PS - My wife is actually the chief writer of this post. I am a lucky fellow in a bunch of ways, and at this moment grateful to be with someone who can turn a bunch of thoughts into a blog entry under deadline.
N.B. The pictures are of some spoons that TFWW's Pate, who will be teaching the City Dweller's Collapsible Shave Horse class, made on her shave horse.
This "Tools & Craft" section is provided courtesy of Joel Moskowitz, founder of Tools for Working Wood, the Brooklyn-based catalog retailer of everything from hand tools to Festool; check out their online shop here. Joel also founded Gramercy Tools, the award-winning boutique manufacturer of hand tools made the old-fashioned way: Built to work and built to last.
On a commercially volatile strip of Lafayette Street, another store has gone out of business, probably pushed out by rising rents.
I've noticed that within days of a store closing down, kids will come through at night and tag the place up, including on the glass.
But right next door is a store that remains in business, and the taggers have spared it. I suppose they either have some kind of code, or fear getting caught for vandalizing a going concern.
I came across this "Can you draw this?" test, which asked if you could trace the following without lifting your pen off of the paper:
I don't like that one because it involves folding the paper, which I think is b.s. This second one was a little more satisfying to figure out:
But then I got to this one:
For some reason, looking at it I simply could not figure it out. After running it through my head I kept getting stuck.
In frustration, I grabbed a pen to try solving it on actual paper--then nailed it on the first try. So for me it was a good example of how some problems are easier to solve by just diving in and doing them, rather than trying to solve them in your head first. (Some problems, not all. If your problem has to do with plumbing, do yourself a favor and call a freaking plumber.)
These are from Harry Houdini's "Book of Magic: Fascinating Puzzles, Tricks and Mysterious Stunts," which has been scanned and can be viewed for free here.
Those who want a physical copy and have 25 bucks to spare can order it here.
My appreciation of what I'd later learn was called "industrial design" was formed early on. At 14 I started working in a restaurant owned by a hands-on penny pincher, and letting a single molecule of a condiment go to waste was a fireable infraction. So we were taught to "marry" ketchup bottles at the end of the night, and I learned how to get every last drop out of a glass Heinz bottle.
That was accomplished by means of this little gizmo:
I never learned who designed the darn thing, even though ten years after first encountering it I was working as a bottle designer. But the ethos of both that device and my boss' nagging never left me, and to this day I take great pride in getting every last drop out of detergent, shampoo and soap bottles. I don't have any use for motor oil bottles, as I don't own a car (which may change soon, stay tuned), but I was tickled when I came across this:
"Being environmentally concerned," writes an unnamed editor over at The Family Handyman, "I try to completely drain oil containers when servicing vehicles and lawn equipment."
The image is from this roundup of "PVC Hacks," and as with all such collections, you'll find some of the ideas silly and others clever and useful. Here are the ones I dug:
I often work alone and am always on the lookout for how to move heavy things by myself. Leapfrogging the pipes would be a damn sight easier than trying to haul this thing:
Staining is my least-favorite thing in the world, and here's how to do a crapload of spindles or other narrow objects quickly and efficiently, without having to fire up the spray gun and compressor:
I've recently been fretting over how to transport a variety of sharp items like chisels, saws and a combat spear (don't ask), and while this isn't the perfect solution, it is starting to give me some ideas:
You can see the rest here.
If you love innovation, here's your chance to make a career of it. You'll work hard. But the job comes with more than a few perks. Imagine what you could do here! At Apple, new ideas...
View the full design job here
This latest series of videos is the best, most in-depth look at the industrial design process from Eric Strebel yet!
We get to see him make design decisions on the fly that consider how to communicate things to the user, use Photoshop to work out the CMF, explain why he prefers to build a prototype by hand rather than digitally, and more. He also films a minor disaster to show you how he deals with it under deadline, something every designer should learn to weather.
(If you missed Part 1, it's here.)
This week we're happy to welcome Yanko Design into the Coroflot Design Employment Network. Our newest partner job board launched last weekend and serves as a great addition to the main Yanko Design site.
Since its inception in 2002, Yanko Design has been showcasing innovative and inspirational design content. Over the years they've become known for curating exceptional design work from around the world, and have built a tremendous community and following in the process.
The team at Yanko Design will be featuring jobs on their LinkedIn page and their Facebook page, both of which have significant followings.
If you haven't looked for a new job recently, there's no better time to start than right now! Check out the Yanko Design Job Board, and while you're there take a look through some of the inspirational design projects on display.
New furniture subscription service Kamarq debuted their first collection earlier this week, but it was a short-lived celebration for designers Nicola Formichetti and PJ Mattan, as many design-savvy New Yorkers were quick to point out the blatant similarities between their Elephant tables and Ana Kras' Slon collection with Matter Made from back in 2015.
Keep in mind that not only do the pieces look almost identical, but their names actually mean the exact same thing—slon literally translates to elephant in Kras' native Serbian language:
Ana Kras was quick to make a statement on social media, directly addressing the situation:
It took countless callouts on his personal Instagram for Mattan to finally make the following statement, which resulted in the majority of Kamarq's inaugural collection being pulled from the site just a few hours after launch:
"This week we debuted our inaugural collection and campaign with Japanese brand Kamarq. Part of the collection was heavily inspired by the elegant long legs of Mario Bellini’s set of Il Colonnato tables from the 1970’s. We acknowledge that certain pieces could also be attributed to the work of designer Ana Kras for Matter, and out of respect for Ana & Matter, we will be removing these pieces from the collection. Kamarq is an ever-evolving brand that will strive to work with many different designers, and we remain respectful of and committed to supporting the creative community at large."
In addition to Mario Bellini's Il Colonnato tables, Formichetti and Mattan also cite the Memphis Group as a strong influence. However, many are saying this statement is not enough, as the duo doesn't directly apologize to Kras and Matter Made. On that note, it seems relatively easy to pull items from a subscription service whose model is based on pulping used pieces and reforming them into new ones. The design duo explained to us that this process is extremely fast-paced and that new furniture can be created and shipped from their factory in Indonesia within a couple of days.
Kamarq's tables and large containers are, without a doubt, very similar in form to Kras' collection, but for the sake of playing devil's advocate here, the bold tables consist of cylinders and cubes, which are basic shapes and difficult to claim ownership of. What really ends the argument for Formichetti and Mattan, though, is the proof that they were aware of Kras' collection, including its name, well before they started designing this collection back in September 2017. If you're going to copy someone, at least have the common sense to bury, or in this case delete, the evidence.
What's your take on this situation? If you were Kras and/or Kamarq, how would you have handled yourself?
When I hear "3D-printed bike from Silicon Valley" I start to roll my eyes, but this company is actually onto something.
Arevo is a start-up that has figured out how to simplify the carbon fiber manufacturing process, rather inexpensively, and free it from size constraints. Typically, carbon fiber is difficult and expensive to integrate into objects because the fibers must be impregnated with resin, laid into a mold and baked in an oven to bind everything together. The oven must obviously be larger than the mold, which limits the size of the object.
Arevo has shifted this process around in a revolutionary way. They've taken an off-the-shelf, six-axis robotic arm and fitted it with a deposition head of their own design. This head not only lays carbon fibers anywhere in 3D space but also extrudes a thermoplastic material at the same time. That material binds the fibers together as they're being laid, eliminating the baking step and the need for an oven.
The implications for this are enormous. Archimedes is claimed to have said something to the effect of "Using a lever I could move the earth, if only I had a place to stand," and Arevo's development is similar in that it's merely a matter of being able to position and move the robot in order to print pieces of any size, like an airplane wing or a hangar roof.
To showcase their technology, the company is producing a decidedly humble bicycle. According to Reuters,
The process involves almost no human labor, allowing Arevo to build bicycle frames for $300 in costs, even in pricey Silicon Valley.
"We're right in line with what it costs to build a bicycle frame in Asia," Miller said. "Because the labor costs are so much lower, we can re-shore the manufacturing of composites."
There's a short (unembeddable) video about their process that you can watch here.
The Ultra Music Festival Team is looking for a Senior Visual Artist with Cinema 4D knowledge. We want someone who is super creative and tapped into all aspects of pop culture - TV, music, movies, memes, Internet sloths, etc. This position also requires a good collaborator who works well in a team, and someone with a strong visual point of view who isn't afraid to use it. We welcome new ideas and fresh thinking - surprise us, pitch cool stuff, make things we didn't ask for. Ultra Music Festival is a fun and expressive place to work and we want someone who is both of those things too. We work hard but we play hard too - often at the same time - and we expect you to do the same.
View the full design job here
Yesterday Elon Musk's Boring Company held a publicly-streamed informational session where they revealed details of their plans to create a traffic-beating tunnel network beneath Los Angeles. Whereas the plan had previously envisioned automated sleds that would whisk passenger cars through the network, it has now evolved into the idea of building mass-transit, 16-person pods for which tickets would be sold at $1 a pop. Whether or not that comes to fruition will be based on both study and feedback from volunteers willing to take free rides on the test track they're currently working on.
Something I found super cool is that the Boring Company will use their digging procedure to create saleable construction materials as they dig. After learning that 15-20% of the cost of digging a tunnel is paying for the displaced dirt to be hauled away, Musk and co. have supposedly developed a way to compress the dirt on-site at high pressure, which, when combined with "a small amount of concrete," will yield cinder-block-sized bricks with a compressive strength of 5,000 PSI (i.e. "rated for California seismic loads," in Musk's words).
"Even if you give away the brick," explained Steve Davis, Boring Company Director, "you've just cut the cost of tunneling by 15-20%."
The presentation is a good hour long, but if your boss leaves the office early on Fridays and you'd like to scan through it, here it is:
I was walking the dogs when this minivan caught my eye.
It doesn't make sense. I've seen vehicles with lifted suspensions before, which is typically done to add clearance for rough terrain. But the ground effects on this minivan are mere inches off the ground.
Well, the graphics on the car (and the handicapped symbol on the license plate and rear right passenger door) should give it away.
I looked it up and VMI, or Vantage Mobility International, is an Arizona-based company that retrofits cars to make them handicap-friendly. I think that's a pretty awesome space to work in, and we've checked out the sector before; peep the links below.
Scroll through the hundreds of icons for "camera" on Noun Project or the 124,706 community-generated drawings of cameras on Google Quick Draw, and you'll notice they're all remarkably similar. Together, they suggest a shared cultural understanding of a camera: a classic point-and-shoot.
But the cameras we encounter every day bear little resemblance—in form or function—to this vestigial object. New capabilities in software, new hardware formats and imaging technologies, and emerging user behaviors around image creation are radically reshaping the object we know of as the "camera" into new categories. Perhaps the most impactful influence on the camera is being brought about by computer vision: empowering cameras to not only capture various kinds of images but to also parse visual information—effectively, to understand the world.
Software trained on vast datasets of labeled images can recognize things like vehicles, dogs, cats, and people, along with facial features, emotions, and second-order information like movement vectors and gaze direction from raw images and videos. Timo Arnall explored this emerging capability of machines to interpret images in Robot Readable World, a collection of computer vision videos from 2012. In the years since, machine learning has advanced by leaps and bounds in both accuracy and speed—see, for instance, the more recent open-source YOLO Real-Time Object Detection technique for comparison—with the potential to transform how we interface with cameras and computers alike. In media analyst and venture capitalist Benedict Evans' recent Ten Year Futures presentation, he discussed the potential impact of machine learning on cameras in the near future: "You turn the image sensor into a universal input for a computer. It's not just that computers see like people, it's that computers can see like computers, and that makes all sorts of things possible."
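To make "recognizing things" a bit more concrete: detectors in the YOLO family emit many overlapping candidate boxes with confidence scores, then prune the duplicates with non-maximum suppression. Here's a minimal, illustrative sketch of that pruning step in plain Python (the boxes and scores are invented; a real detector runs this over thousands of candidates per frame):

```python
def iou(a, b):
    """Intersection-over-union of two boxes, each given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def non_max_suppression(boxes, scores, thresh=0.5):
    """Keep the highest-scoring box in each cluster of overlapping detections."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < thresh for j in keep):
            keep.append(i)
    return keep

# Three raw detections of a "dog": two heavily overlapping, one elsewhere.
boxes = [(10, 10, 50, 50), (12, 12, 52, 52), (100, 100, 140, 140)]
scores = [0.9, 0.75, 0.8]
print(non_max_suppression(boxes, scores))  # → [0, 2]; the duplicate is suppressed
```

The surviving boxes, each tagged with a class label like "dog" or "person," are what the application layer actually sees.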
Cameras enabled with machine learning therefore have the potential to both automate existing functions of the camera as a tool for human use and extend its creative possibilities far beyond image capture. One notable example is the recently launched Google Clips camera. Clips is a small camera with a special superpower: it understands enough about what it sees that it can take pictures for you. You set it on a shelf or a table or clip it to something, and on-board machine learning allows it to continuously observe, learn familiar faces, detect smiles and movement, and snap a well-composed candid picture, all while allowing you to be present in the scene instead of behind the viewfinder. It also does all of this without connecting to the internet.
As computing hardware has been miniaturized and made more affordable and machine learning algorithms more efficient and accurate, we'll likely see more cameras—and objects of all kinds—imbued with intelligence. According to Eva Snee, UX Lead on the Clips project, there are a lot of technological and user benefits to this approach. By learning on-device instead of communicating to servers in the cloud, the device can maintain the privacy of its user (all clips are stored locally on-device unless you choose to share or save them to your photos library on your phone) and operate much more efficiently in terms of both battery power and speed. "No one gets this data except you," says Snee. "That was very deliberate: you don't need a Google account, you don't need Google Photos."
Clips suggests a future of cameras as photographers, where decisions about the moment of capture are further shifted to the device. In the early design phases of the project, Snee says that the team asked themselves, "we're building an automatic capture camera, why does it need a button?...This is an amazing breakthrough—let's just make a camera that does it all for you." The Clips team stopped short of removing the button entirely, however. Snee explains that in addition to helping to train a camera to appreciate the inherently subjective, personal nature of photography, the button remained functionally significant:
"Every other camera that a human has interacted with in their life has a button. So it felt extremely foreign, it didn't make sense to people, and it actually made it harder for them to really understand how to use this thing and to understand even what capture means. That was a core design goal that we changed our position on—we need to give people agency and control just like they would have in a traditional camera."
A camera like Clips that can choose an appropriate moment to capture is really only the tip of the iceberg when considering the larger implications of machine learning. As the capacities of computer vision systems continue to evolve rapidly, what else might a camera that understands what it sees be capable of? How might these capabilities shift our relationship with our devices?
Pinterest Lens points to a potential future for the camera as a kind of sampling device—perceiving phenomena it can interpret from its environment and reporting back to the user. Every time you pin an image to a board on the Pinterest platform, you are creating a set of associations between it and other images on the board, which helps Pinterest's machine learning systems to categorize images. Lens leverages these insights to give its smartphone app a semantic understanding of on-camera objects, and use it for "visual discovery"—essentially querying the world for information relevant to your interests.
The camera in this context is a kind of interpretation device for a user's lived experience, extracting salient information from what it sees and reporting back with useful information rather than serving as a tool for composing and capturing a moment in time.
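At its core, the "visual discovery" lookup a system like Lens performs is nearest-neighbor search in an embedding space: each image is reduced to a feature vector, and a query returns the items whose vectors lie closest. A toy sketch with invented vectors and item names (Pinterest's actual pipeline is, of course, far more elaborate):

```python
import numpy as np

# Hypothetical embeddings: each catalog image reduced to a feature vector
# by some learned model. Real embeddings have hundreds of dimensions.
catalog = {
    "mid-century chair": np.array([0.9, 0.1, 0.0]),
    "succulent planter": np.array([0.1, 0.8, 0.3]),
    "brass floor lamp":  np.array([0.2, 0.1, 0.9]),
}

def most_similar(query, catalog):
    """Rank catalog items by cosine similarity to the query embedding."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return sorted(catalog, key=lambda name: cos(query, catalog[name]), reverse=True)

# A snapshot of an unknown chair, embedded by the same (hypothetical) model.
snapshot = np.array([0.85, 0.2, 0.1])
print(most_similar(snapshot, catalog))  # the chair ranks first
```

The camera, in other words, becomes the query box: point it at an object and the system answers with whatever its embedding space considers visually related.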
Beyond interpreting phenomena at capture, machine learning—and especially techniques like Generative Adversarial Networks (GANs)—extends the camera's expressive potential into profoundly unnerving territory. These algorithms have the remarkable ability to synthesize realistic images from the emergent patterns in a database of images. Since their characteristics are drawn from real conditions, they produce a kind of uncanny fantasy of reality: they capture alternative conditions in alternative presents. And as a result, they suggest the potential of a camera without a camera, the full dissolution of the camera's physical form into software.
Take for instance the paper Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks, which showcases a technique to translate the characteristics of one set of photos into those of another—for instance, transforming horses into zebras or oranges into apples. Or GPU maker NVIDIA's research paper "Progressive Growing of GANs for Improved Quality, Stability, and Variation" and the uncanny images it produces while exploring the latent space of a database of celebrity faces.
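The "cycle-consistent" idea in that first paper is easy to state: translating an image from one domain to the other and back should approximately return the original, and training penalizes any drift. A toy numpy sketch with trivial stand-in generators (the real ones are learned convolutional networks; these are just invertible arithmetic for illustration):

```python
import numpy as np

# Stand-in generators: in the paper these are learned networks mapping
# between two image domains (e.g. horses <-> zebras).
def G(x):
    """Domain A -> domain B (here a fixed affine transform)."""
    return 2.0 * x + 1.0

def F(y):
    """Domain B -> domain A (a slightly imperfect inverse, as mid-training)."""
    return (y - 0.9) / 2.0

def cycle_consistency_loss(x, G, F):
    """L1 penalty on F(G(x)) differing from x, one term of the CycleGAN objective."""
    return float(np.mean(np.abs(F(G(x)) - x)))

image = np.array([[0.2, 0.4], [0.6, 0.8]])  # a tiny stand-in "image"
print(cycle_consistency_loss(image, G, F))  # small but nonzero: F isn't G's exact inverse
```

Driving this loss toward zero, jointly with the adversarial losses, is what keeps the translated images tethered to their sources rather than drifting into arbitrary plausible pictures.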
These algorithms use real images as a basis for manufacturing reality. In this sense, they are reminiscent of the 5th Dimensional Camera, a terrific early project by the speculative design studio Superflux. This camera was essentially a prop, designed to suggest the possibilities of the many worlds theory in the emerging science of quantum physics. It's a fictional camera that captures parallel worlds, the parallel possibilities between two moments in time.
The images that GANs produce have a similar quality: extrapolating from conditions in the world to explore plausible alternate realities. None of these images are "real" per se, but as their features are drawn from the world, they are both somehow made of the real and in composite, made unreal. As a result of this inherent ambivalence, reality gets a bit wobbly. Similar systems for extrapolating on image sets are empowering a new arsenal of Fake News and manipulated-to-an-unhealthy-extreme advertisements, transforming political attitudes and images of self in the process.
Consider the already existing pressures on image manipulation in advertisements and the way we present ourselves on social networks as described in Jiayang Fan's piece in the New Yorker, China's Selfie Obsession:
"I asked a number of Chinese friends how long it takes them to edit a photo before posting it on social media. The answer for most of them was about forty minutes per face; a selfie taken with a friend would take well over an hour. The work requires several apps, each of which has particular strengths. No one I asked would consider posting or sending a photo that hadn't been improved."
Pressures to automatically enhance images are likely to continue. Imagine an internet ad adjusting its image content on demand, to match a model trained on an individual viewer's interests. Or an Instagram filter guaranteed to increase follow count by curating your feed and manipulating your images imperceptibly towards a more desirable ideal. Perhaps in such a world of truth-bending we'll take on-camera image manipulation for granted as long as it furthers our interests. But where might this lead us?
In Vernacular Video, a keynote talk at the 2010 Vimeo Awards, sci-fi author and futurist Bruce Sterling took the notion of a camera as reality-sampling-device one step further to explore the future possibilities of what he called a "magic transformation camera", capable of total understanding of a given scene.
"In order to take a picture, I simply tell the system to calculate what that picture would have looked like from that angle at that moment, you see? I just send it as a computational problem out into the cloud wirelessly. [...] In other words there's sort of no camera there, there's just the cloud and computation."
Sterling distills photography into its core action: the selection of a specific vantage point at a specific moment in time. Yet, in the future, this "decisive moment" is instead reconstructed by querying a database with comprehensive knowledge. He later describes imaging and computational power embedded in paint flakes in the walls, a kind of sensory layer on everything, capable of observing everything.
This future concept inspired the early development of DepthKit, a toolkit built by Scatter in the context of efforts by the creative coding community to explore the expressive potential of the Kinect and similar structured-light scanners capable of bringing three dimensionality to recorded images. The technique, called volumetric video (on which you can read more by Scatter co-founder James George here and here), allows a real scene to be captured with depth information, or even in-the-round from various vantage points, with perspectives on the scene lit and composed after the fact.
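The geometry underlying volumetric capture is straightforward pinhole-camera math: given a per-pixel depth measurement and the camera's intrinsics, every pixel back-projects to a point in 3D, and the resulting point cloud can be re-lit and re-framed from any vantage. A minimal sketch with made-up intrinsics (illustrative only, not DepthKit's actual API):

```python
import numpy as np

def back_project(u, v, depth, fx, fy, cx, cy):
    """Map a pixel (u, v) with measured depth to a 3D point in camera space."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# Hypothetical intrinsics for a 640x480 depth sensor:
# focal lengths fx, fy and principal point (cx, cy), all in pixels.
fx = fy = 525.0
cx, cy = 320.0, 240.0

# A pixel near the image center, measured 2 meters away, lands
# just off the optical axis with z equal to its depth.
p = back_project(330, 250, 2.0, fx, fy, cx, cy)
print(p)
```

Do this for every pixel of every frame and you have a scene you can orbit after the fact, which is exactly the affordance volumetric filmmakers exploit.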
So, what to make of all this? If we consider the implications of products like Google Clips and Pinterest Lens, algorithmic approaches like GANs, and Sterling's magic transformation camera as indicators for the future of the camera, it suggests a camera as far more than a point-and-shoot.
Google Clips suggests a near future of cameras as things endowed with agency, capable of observing, composing, and selecting moments to capture for us. And furthermore, it suggests future cameras as learning platforms, evolving over time in response to human use. Pinterest Lens suggests cameras as reality querying devices, interpreting our surroundings for information of value to us. GANs extend this possibility into generative territory, a near future of cameras as reality-extrapolation or -distortion devices, building on learned models to produce convincing synthetic images. Bruce Sterling's speculative future camera and the volumetric filmmaking techniques it inspired suggest a near future camera whose act of taking a photograph is one of searching through recorded moments from a total history of lived experience.
All of these cases suggest a camera with a very different kind of relationship to its operator: a camera with its own basic intelligence, agency, and access to information. Beyond a formal evolution away from the artifact of the "camera", these novel capabilities should complicate our expectations of what a camera is capable of. And increasingly, we may need to acknowledge a certain speciation has happened: these strange new cameras deserve categories of their own in order to contend with the competing visions of reality they suggest.
We're three days into NYCxDesign, and our calendars couldn't be more packed! Although we haven't hit ICFF yet, we've discovered plenty of smaller wonders around the city. There's so much going on that if you haven't visited our NYCxDesign Map, we suggest you do so before you get too overwhelmed. Below is a list of our favorite shows so far:
If you're looking to visit an exhibit that explores the past, present, and future (or even parallel universes?) of design, WantedDesign Brooklyn is a good bet. Exhibitions like Oui Design, which features several prominent French product designers, offer a look into the trends of the design world today. Student presentations, such as SVA's "Radical Times" exhibit exploring speculative pasts and futures, and Carolien Niebling's "Sausage of the Future," present something more surreal that will force you to ponder the numerous ways in which designers play a role in shaping our culture and future.
Furnishing Utopia 3.0
Who said chores had to be boring and mundane? The third edition of Furnishing Utopia asked 26 international designers to explore and reinterpret the focused work and cleanliness of the Shakers, who regarded such labor as a path to enlightenment. Holding objects such as spray bottles, watering cans and handles in high regard, this exhibition will make you crave cleaning your home, making the "Sensory Isolation Booth" at the back of the exhibition extremely fitting. In the booth, visitors are asked to test out the various brooms in the exhibition by sweeping up different materials.
BALANCED/UNBALANCED at Colony
The pieces on display at Colony in SoHo, including these "Bumpy Growers" by Poritz & Studio, play with the theme of Balanced / Unbalanced. The show is a relaxing escape from the neighborhood's busy streets (especially Canal Street), and will remain open through the 24th.
Ladies & Gentleman Studio for MUJI
Ladies & Gentleman Studio's installation for MUJI isolates the beauty of the materials used in iconic products from the Japanese retailer. This unassuming materials heaven displays MUJI products atop their raw materials: ceramics are placed on raw slabs of clay, for instance, and wooden bookcases rest on wood shavings. If you need a zen moment, stop by this installation, touch the raw materials and instantly feel revived.
Sight Unseen Offsite
Sight Unseen Offsite's main location may have downsized, but the quality of work shown at the crowd-favorite show remains strong. This year's show is heavily focused on unexpected collaborations, including the mini show-within-a-show Field Studies, which paired celebrities with designers to surprising results. Think a mirror designed by Bower and Seth Rogen, and a piano designed by Wall for Apricots and Jason Schwartzman.
The Future Perfect
The Future Perfect is putting on quite the show at 55 Great Jones Street. The dark, dimly but beautifully lit space highlights tropical designs from Chris Wolston among a few other furniture and lighting designs with lavish materials.
At Patrick Parrish Gallery at 50 Lispenard in SoHo, a series of sculptures by artist Carl Emil Jacobsen exhibits an earnest passion for material exploration. Jacobsen's work involves gathering found materials such as tiles, stones, volcanic ash, and chalk to make his own bespoke pigments, and the pieces serve to highlight the beauty of color that derives purely from nature.
American Design Club Presents: Built at Canal Street Market
American Design Club has set up shop toward the back of Canal Street Market, featuring an exhibition packed with design inspiration. From fuzzy chairs to quick 3D-printed planters, you'll keep discovering more and more gems in this small corner of Design Week.
Curious to see more from NYCxDesign? Follow our stories & posts this week on Instagram and plan your visit with our NYCxDesign Map!
On June 30th the New York Aquarium will open "Ocean Wonders: Sharks!" This is a 57,500-square-foot, $146 million exhibit that has taken over a decade to create, in part because the aquarium was badly damaged by Hurricane Sandy.
Now coming into the home stretch, the aquarium employees need to get the sharks from their old tank into the new tank, which is about four blocks away. How the heck do you move a shark? The procedure, which involves plastic boards, is human-resource-intensive and seems pretty primitive:
My main design gripe is with those plastic boards, which the Times article refers to as "breaker boards":
I can't imagine their original commercial purpose, but Jeez Louise, with a $146 million budget I'd have liked to see something with proper handles on the non-shark side rather than finger cut-outs.
"You just have to be quick," marine biologist and shark supervisor Hans Walters told the Times. "If they start thrashing you pull your hands away."
I'll never complain about transporting a dog again.
Designing an object that will accommodate one animal, but not another animal of similar size, is a tricky endeavor. A bird feeder is a good example: How do you allow your avian friends to access the seed, without greedy neighboring squirrels helping themselves?
One solution might be to cover the seed-dispensing apertures with a wire mesh. But here we can see that another design feature of this bird feeder--a wide aperture that allows humans to easily load it--can be exploited:
Another idea is to encircle the feeder in a cage. But this, too, can be defeated:
This squirrel has even managed to squeeze its entire body into the cage:
A more clever solution might be to exploit physics and gravity. If the feeder reacts differently to the weight of a squirrel than to that of a bird, it could be made to spin or rotate in such a way as to fling the squirrel off.
Squirrels have managed to counter that.
How about an umbrella-shaped baffle over a suspended feeder? That would prevent the squirrel from climbing around it, no?
So, what does work?
And placing a spinning feeder the proper distance away from anything a squirrel could use to stabilize itself:
This is sort of like the corporate version of looking inside someone's workshop to see what kind of nifty little jigs they've built for themselves to simplify production. Upon learning that some of their factory and logistics employees walk up to 12 kilometers a day inside their facilities, BMW's Group Research and Technology House turned their engineering might towards creating a personal transportation device. The Personal Mover Concept that they came up with is pure form-follows-function.
"It had to be flexible, easy to maneuver, zippy, electric, extremely agile and tilt-proof – and, at the same time, suitable for carrying objects," explains Richard Kamissek, head of BMW's Operations Central Aftersales Logistics Network.
The body platform of the Personal Mover Concept is 60 centimeters wide and 80 centimeters long, so that a person can stand comfortably on it and still have room for larger, heavy objects. Two wheels at the rear corners of the platform and two support wheels at the front ensure that it does not tip over, even in tight bends. The two front support wheels rotate 360°, which greatly increases maneuverability. The handlebar and drive wheel are sunk into the middle of the body platform at the front.
The handlebar contains the entire electrical system, the battery and the drive wheel, and can be rotated 90° to the left and right, allowing the Personal Mover Concept to turn on the spot. A thumb throttle for regulating speed is integrated into the right grip. This control is used to start the Personal Mover Concept, switch the light on and off, select the driving mode or check battery status. For safety, there is also a bell for warning other employees. The left grip operates the brake and a dead man's control.
The PMC uses regenerative braking and can hit 25 km/h (15.5 mph). It can be plugged into a regular outlet for recharging, and the battery's good for 20 to 30 kilometers.
While the PMC is real--if BMW Blog is to be believed--apparently it has yet to be batch-produced and distributed. Says Kamissek, "We hope to start using it as soon as possible!"