Sunday, November 27, 2011

The Basics of 3D



Did you ever wonder how 3D works?   How do images come out of the screen or move behind it?   What do the glasses do?   Why do some have colors and others look like sunglasses?

3D is fascinating because it involves trickery of the brain.

Most people have what is known as binocular vision to perceive depth and see the world in 3D.   The separation between our two eyes causes each eye to see the world from a slightly different perspective.   The brain merges these two views, and from the difference between the two images it can calculate the distance of each object.  So, 3D, or “stereoscopy,” refers to how your eyes and brain create the impression of a third dimension.

A simple way to understand this principle is to hold your thumb up at arm's length and close each eye alternately while looking at it.    As you switch eyes you should see your thumb “jump” back and forth against the background.  

You’ll notice that the angle from which you’re viewing the thumb changes and that you can see different parts of the thumb depending on which eye is open. In a sense you are seeing two different images of the thumb.

When you view the thumb with both eyes you are still seeing two images but your brain makes one image. This allows your brain to understand that the object has depth.
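
To put rough numbers on the thumb experiment: the angular parallax between the two views is approximately the eye separation divided by the distance to the object. Assuming an interocular distance of about 6.5 cm and an arm's length of roughly 60 cm (both assumed, typical values),

$$\alpha \approx \frac{e}{d} = \frac{0.065\ \text{m}}{0.60\ \text{m}} \approx 0.11\ \text{rad} \approx 6^\circ$$

Six degrees is a large angular shift, which is why the thumb's jump against the distant background is so easy to see.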

Now that we understand why we perceive in 3D, it follows that any display technology hoping to trick your eyes into believing they are viewing a 3D image must deliver a slightly different image to each eye through some form of technological trickery.

In general, for 3D movies and TV broadcasts, the left and right images captured by a “stereoscopic” camera are projected or displayed simultaneously; glasses or filters then feed each of your eyes its own perspective of the same scene to create a sense of depth.  Stereoscopic cameras are either two cameras mounted on a rig, two lenses on one camera, or two cameras that operate through the same lens but split the image inside the camera using tiny mirrors.




There are several different types of 3D viewing systems with associated glasses:



Color filter glasses (Anaglyph)

Color filter glasses are one of the oldest methods of viewing 3D images or movies (first developed in 1853).  The idea is to split an image in two by color: the left eye receives only the red component of the image and the right eye only the blue (or green) component.
Both sub-images, which show the scene from slightly different perspectives, are combined and displayed on the monitor or screen at the same time.
Again, the system works by feeding different images into your eyes. The color filters each admit only one of the two images into the corresponding eye, and your brain does the rest. There are two color filter systems: Red/Blue and Red/Green.   This technique, however, doesn't allow a full range of color and has a tendency to “ghost,” with the once-distinct images bleeding into one another (not to mention being more apt to cause headaches and nausea).
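
As a concrete illustration, here is a minimal sketch of composing the common red/cyan variant of an anaglyph from a stereo pair, assuming Python with the Pillow imaging library; the filenames are hypothetical and both views must be the same size:

    # Compose a red/cyan anaglyph from a left/right stereo pair.
    # Assumes Pillow (pip install Pillow); filenames are hypothetical.
    from PIL import Image

    left = Image.open("left.jpg").convert("RGB")
    right = Image.open("right.jpg").convert("RGB")

    r, _, _ = left.split()    # red channel from the left-eye view
    _, g, b = right.split()   # green and blue channels from the right-eye view

    anaglyph = Image.merge("RGB", (r, g, b))
    anaglyph.save("anaglyph.jpg")

Through red/cyan glasses, the red filter admits only the left-eye channel and the cyan filter only the right-eye channels; the brain fuses the two.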

Polarizing glasses (Passive, not electronic, glasses)

This method is commonly used in today's 3D movie projections, such as the RealD and IMAX 3D systems. The audience wears glasses whose two lenses pass different polarizations of light: IMAX 3D uses linear polarizers oriented 90 degrees apart, while RealD uses circular polarization, which tolerates head tilting. Either way, each eye sees its own picture clearly while the image projected in the opposite polarization is blocked and appears black.      Stereo 3D theaters use special silver-coated screens because, unlike matte white screens, they preserve the polarization of the light they reflect back to the audience.  
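
For the linearly polarized case, the blocking follows Malus's law: light of intensity I0 passing through a polarizer oriented at angle theta to its polarization direction emerges with intensity

$$I = I_0 \cos^2\theta$$

At 90 degrees essentially nothing gets through, which is why each eye sees only the image projected in its own polarization.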

LCD shutter glass (Active, electronic, glasses)

In the LCD shutter glass 3D display, the glasses are synced to your television and actively open and close shutters in front of your eyes, allowing only one eye to see the screen at a time. The glasses are kept in sync with the set using Bluetooth, infrared or radio. Each lens is an LCD that can be made opaque, acting as a shutter, and the shutters alternate so quickly that the switching is hardly noticeable. Shutter lenses are made possible by the high refresh rates of 3D-enabled televisions (typically 120 or 240 Hz): even though the set alternates left-eye and right-eye frames, each eye still receives 60 or more frames per second, so through the glasses you perceive one steady image instead of a flicker.   The downside is that the glasses are expensive and require batteries.


Where Stereo 3D becomes interesting is in learning how to manipulate these images on screen for creative effect. How do you make objects appear as though they are coming out of the screen towards you or make the actor appear to be in front of an object?
Simply put, your mind has a number of depth cues. These are signs that tell the brain that there is a measurable distance between objects. These have been manipulated by filmmakers for years. Focus is the easiest to understand. If something is in focus and the objects around it are out of focus then your brain can understand that there is a distance between these objects.

By altering the horizontal distance between the left and right images we can control how far forward or backward objects appear. Moving the two images closer together (or crossing them) forces your eyes to converge as though the object were closer, while moving them further apart makes your eyes converge less, as though the object had receded. Too much manipulation either way and the viewing experience becomes very uncomfortable. 
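
The geometry behind this is simple similar triangles. If the viewer's eyes are separated by e, the screen sits at viewing distance V, and matching points in the left and right images are separated on screen by a parallax p (positive when the right-eye point sits to the right), the fused point appears at distance

$$D = \frac{eV}{e - p}$$

With p = 0 the point sits on the screen plane; positive parallax pushes it behind the screen (p approaching e sends it to infinity); negative, or crossed, parallax pulls it out in front of the screen toward the viewer.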

The eight depth cues
Humans have eight depth cues that the brain uses to estimate the relative distance of the objects in every scene we look at. These are listed below. The first five have been used by artists, illustrators and designers for hundreds of years to simulate a 3D scene in paintings and drawings. The sixth cue is used in film and video to portray depth in moving objects. It is the last two, however, that provide the most powerful information our brains use to estimate depth.

Combining depth cues
When several of these depth cues combine they offer a very strong sense of depth. A photograph that contains perspective, lighting and shading, relative size, and occlusion, for example, produces a very convincing sense of depth.

1. Focus
When we look at a scene in front of us, we scan over the various objects in the scene and continually refocus on each object. Our brains remember how we focus and build up a memory of the relative distance of each object compared to all the others in the scene.

2. Perspective
Our brains are constantly searching for the vanishing point in every scene we see. This is the point, often on the horizon, where objects become so small they disappear altogether. Straight lines and the relative size of objects help to build a map in our minds of the relative distance of the objects in the scene.

3. Occlusion
Objects at the front of a scene hide objects further back. This is occlusion. We make assumptions about the shape of the objects we see; when a shape appears broken by another object, we assume the broken object is further away, behind the object causing the break.

4. Lighting and shading
Light changes the brightness of objects depending on their angle relative to the light source. Objects appear brighter on the side facing the light source and darker on the side facing away from it. Objects also cast shadows which darken other objects. Our brains can build a map of the shape and relative position of objects in a scene from the way light falls on them and the pattern of the shadows they cast.

5. Color intensity and contrast
Even on the clearest day objects appear to lose their color intensity the further away that they are in a scene. Contrast (the difference between light and dark) is also reduced in distant objects. We can build a map in our minds of the relative distance of objects from their color intensity and the level of contrast.

6. Relative movement
As we walk through a scene, close objects appear to be moving faster than distant objects. The relative movement of each object compared to others provides a very powerful cue to their relative distance. Cartoonists have used this to give an impression of 3D space in animations. Film and television producers often use relative movement to enhance a sense of depth in movies and television programs.

7. Vergence
Vergence is a general term covering both convergence and divergence. If we look at an object in the far distance, both eyes point forward, parallel to each other. If we focus on an object close up, our eyes converge; the closer the object, the greater the convergence. Our brains can calculate how far away an object is from the amount of convergence our eyes apply to fixate on it. Film and video producers can use divergence as a trick to give the illusion that objects are further away, but it should be used sparingly: divergence is not a natural eye movement and may cause eye strain.
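
In geometric terms, the convergence angle for an object at distance d, given an eye separation e, is

$$\theta = 2\arctan\!\left(\frac{e}{2d}\right)$$

A thumb at 60 cm demands about 6 degrees of convergence; an object 20 m away needs less than 0.2 degrees, effectively parallel. This is why vergence stops being a useful depth cue beyond a few meters.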

8. Stereopsis
Stereopsis results from binocular vision: the small differences between the left-eye and right-eye views of everything we look at. Our brains calculate which objects are close and which are further away from these differences.  The “jumping thumb” example we used earlier is a demonstration of stereopsis.

Glassless Television Displays (Autostereoscopic)

Autostereoscopic describes essentially any display that does not require glasses to view the image in 3D.   The Nintendo 3DS, Nintendo's newest portable 3D gaming device, is one such device; its top screen uses a parallax barrier to steer separate images to each eye. Other autostereoscopic displays rely on a lenticular film: lenticules are tiny lenses on the base side of a special film coating the screen, which displays two interleaved sets of the same image. Some designs add eye tracking via a forward-facing camera, shifting the display to keep the 3D effect aligned no matter where the user's face is.

Autostereoscopy will develop on handheld devices before it heads to large format screens.  Other “glassless” products for 3D include mobile phones, laptops, cameras and camcorders.


Autostereoscopy relies on the use of special optical elements between the television screen and the viewer so that each eye of the viewer receives a different image thus producing the illusion of depth. This can typically be achieved in flat panel displays either using lenticular lenses or parallax barriers.
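
As a first-order sketch of the parallax-barrier geometry: if the display's pixel pitch is p, the viewer sits at distance D, and the eyes are separated by e, the barrier must sit a gap g in front of the pixel plane so that, through each slit, adjacent pixel columns map to different eyes:

$$g = \frac{pD}{e}$$

For a 0.1 mm pixel pitch viewed from 50 cm with a 6.5 cm eye separation, that works out to a gap of under a millimeter, which is easy to build into a handheld display.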


One of the downsides of both lenticular and parallax-barrier screens is if you move your head, or get too close or far away from the screen, the effect breaks down. Displays like this work reasonably well in portable devices like the Nintendo 3DS and Sony's TD10 because their screens are small, but scaling up is very expensive. 


Another issue is that both lenticular and parallax-barrier screens reduce overall image resolution, which has unpleasant consequences for the image quality of 2D footage. To compensate, any future big-screen autostereo TVs will need to have a much greater resolution than today's HD models.


3D Production Challenges
3D production techniques and the associated complexity would also require another long post.   

The stereographer is a new vocation in film, television and video games production. This person will monitor material from one or more 3D camera rigs and check that the 3D image is correctly aligned and positioned in the 3D space. The stereographer will also ensure that the 3D image is kept within the allocated depth budget throughout post-production.  


Suffice it to say it is easy to produce “bad” 3D.  Two proponents of producing “good” 3D, who evangelize the techniques for doing so, are Steve Schklair and Vincent Pace.

Click on this link for an interesting video interview with Steve Schklair, who talks about what difference filming The Hobbit in 3D at 48 frames per second will make, new 3ality Digital technology to enable fast 3D cutting, a large 3D movie project in Russia, and how 3D technology has changed since 3ality Digital's U23D production.




Thursday, October 27, 2011

IT Leadership Conference At Interop NY 2011 - Part 1


My colleague, Kyle Knack, recently attended an IT Leadership track at Interop in NYC.
Interop is billed as the Leading Business Technology event dealing with subjects like Cloud Computing, Virtualization, Network Security, Mobility and Data Centers. 

Kyle is a network systems, storage and server expert who’s built an impressive team handling the design and deployment of our media and publishing infrastructure at National Geographic.   His thoughts from the IT Leadership conference are worth sharing.  

Professional development opportunities like this are not only educational and motivating, but they get us out of the day to day grind (albeit for a short period) and give us the ability to think about the big picture.  Below is Kyle’s summary:


 IT Leadership Conference At Interop NY 2011 - Part 1

      I had the opportunity to attend a two-day IT Leadership conference at
the Interop conference in NY early last week, hosted by a panel of
past and present Fortune 100 CIOs and IT executives.  During the
intensive workshop we were immersed in panel discussions, Q&A,
stories, trends and more.  Part 1 of my report covers a theme common
to all of the speakers - value and innovation in IT.

      The seasoned vets all had one common message about what a world class
organization needs to do to be successful - bring value to the
business through IT products and IT innovation.  And more often than
not, the two go hand in hand.  So what does value in IT mean anyway?
We defined value as the contribution IT makes to improving a company's
products and services, and thus the bottom line, above and beyond the
day to day operations.  Although that seems like a simple concept,
it's often overlooked in many organizations due to a disconnect
between the business owners (and ultimately the CEO/President/CFO) and the
IT management.  As an IT executive, just ask yourself this question - What have
you done in your tenure (besides adding servers, network, etc) to grow the
front-end business, suggest and bring new products to market, or
otherwise bring new value to the organization?

So what does it take to produce business value in IT?  Organization
and innovation.  Organization is an important factor, which we'll
cover in part 2.  So let's dive into innovation.  We all know today's
IT landscape - too much work, too little time, not enough resources,
not enough budget, the list is endless.  But let's pretend none of
those exist.  At that point, much like say Shell Oil or Boeing,
innovation becomes the key consumer of resources.  Now in reality, 95%
of IT organizations have to worry about those aforementioned factors,
but without focusing some dedicated resources on innovation and
problem solving they will be forever treading water.  And innovation
is two-fold.  It helps the IT organization, by having a dedicated
staff to address key issues internally without interrupting day to day
operations.  But it also helps the business by having key staff
focused on new and up-and-coming technologies, giving the organization
a valuable advantage above its competitors.  And there's that magical
word - value.

      Now bear in mind, not all organizations are large enough to go off
and create a whole unit tasked with R&D.  But that doesn't mean there
isn't still opportunity to get into the mindset of growing the
business from the backend.  It could be as simple as bi-weekly tech
sessions, where IT staff meet with business owners to understand their
challenges and share their own ideas.  Or it could be a more elaborate
program where certain IT staff dedicate some of their time each
day/week/month to non-operational tasks.

      The takeaway here is we should start considering the value we receive
from our technology, where we can improve the business through technology,
and how IT in general can help drive the front end of the business to reach our
organizational goals.
--
Kyle Knack
Director, Infrastructure Systems
National Geographic Global Media

Sunday, October 16, 2011

Akamai Edge Customer conference - Innovating at the Edge


I attended the Akamai Edge customer conference last week for 2 days out of 3.   The conference is an annual gathering of Akamai’s customers focused on the challenges and best practices for business innovation in today’s hyperconnected world. 
 
Interactive technical sessions and case studies explored strategies for tackling application migration to the cloud, mobile site optimization and performance, security, and the consumption of rich media across any device.   There were over 800 people in attendance with over 200 international customers. 

Akamai is a Content Delivery Network (CDN) provider that operates a global “edge” network, meaning its servers sit relatively close to the end customers who consume web and mobile sites and applications.  Because the network extends to the edge and employs caching, performance is high: fast page and application load times and high reliability.   Many web sites and mobile applications are “powered” by Akamai, whose network is used by many content providers to deliver their traffic to their consumers.

From Wikipedia, “Akamai provides a service to companies that have content on the Internet (Akamai's customers), to more efficiently deliver this content to users browsing the Web and downloading content. Akamai does this by transparently mirroring content—sometimes all site content including HTML, CSS, and software downloads, and sometimes just media objects such as audio, graphics, animation, and video—from customer servers. Though the domain name (but not subdomain) is the same, the IP address points to an Akamai server or another user's machine that Akamai is using as a server rather than the customer's server. The Akamai server is automatically picked depending on the type of content and the user's network location.

The benefit is that users can receive content from whichever Akamai server or user is close to them or has a good connection, leading to faster download times and less vulnerability to network congestion or outages.

In addition to content caching, Akamai provides services which accelerate dynamic and personalized content, J2EE-compliant applications, and streaming media to the extent that such services frame a localized perspective.”
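
One visible consequence of this design shows up in DNS: a customer's hostname typically points, through a CNAME chain, at an Akamai edge name, which in turn resolves to a nearby edge server. Here is a minimal sketch of how you might observe this from Python; the hostname is hypothetical, so substitute any Akamai-fronted site:

    # Observe CDN mapping through DNS resolution.
    # The hostname below is hypothetical.
    import socket

    host = "www.some-akamai-customer.com"
    canonical, aliases, addresses = socket.gethostbyname_ex(host)

    print("canonical name:", canonical)  # often an *.edgekey.net or
                                         # *.akamaiedge.net name for Akamai sites
    print("aliases:", aliases)
    print("edge IPs:", addresses)

Run from different networks, the same hostname will generally resolve to different edge IPs; that is the “close to the edge” mapping at work.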



Some fascinating statistics were quoted during the conference.  According to Johan Wibergh, EVP & Head of Networks for Ericsson, 5 billion people own mobile phones today, and by 2020 there will be 50 billion connected devices.   That number is staggering considering that today there are approximately 7 billion people in the world!   As a corollary, he predicts that data over mobile will increase 15-fold in the next 5 years.  Others, including the Economist, predict less growth by 2020, but even 10 billion connected devices is staggering.   

What does this mean for content providers?   Growth! And undoubtedly the quality and curation of that content will differentiate providers.   Look at the launch of Apple's Newsstand last week via iOS 5.   Even with Apple's App Store “toll” of 30%, magazine publishers rushed to launch their interactive magazine subscriptions on day one.  As an aside, National Geographic, The Daily, Wired and Reader's Digest were picked as featured U.S. publications inside Newsstand for the Apple launch.   

Additional statistics and forecasts related to Akamai’s network include:
                                          2006          2011          2016
IP addresses accessing daily:            200 M         600 M       1,620 M
Content delivered daily:                  2 PB         50 PB      1,500 PB
Mobile traffic delivered daily:           3 TB        520 TB     91,000 TB
Commerce transactions daily:            $140 M        $550 M      $1,650 M
Peak attack traffic (hacking):         24 Gbps      200 Gbps    1,600 Gbps


This is impressive growth, especially the predictions for mobile.  It should also be noted that the increases in the dollar value of e-commerce transactions and in “hacking” traffic point to a critical need for robust security, which will only grow more important as more is at stake.

As a side note, of Akamai's 3,700 customers, half have mobile-optimized websites.   Of the Fortune 50 companies, 40% have mobile-optimized websites.   Consumers will not keep using a website on a smartphone if the experience is poor.

Jonathan Miller, the Chief Digital Officer of NewsCorp, gave several interesting presentations.   At one point he commented on mobile video growth and offered this interesting perspective: “Once there is a mobile digital series ‘hit’ the way Angry Birds is a hit, the TV development model will change.”

Tom Leighton, the Chief Scientist for Akamai, gave a live demo of Akamai’s TV Everywhere solution.   TV Everywhere is a solution for cable providers and others, to allow viewing on, for example, a large screen TV, with the ability to pause and continue watching on another device, such as an iPad.

Traditional CDN providers like Akamai are extending their service offerings with features and functionality to facilitate application and multi-platform delivery.   There was much discussion of migrating to the cloud, security, HTML5 and mobile.  
Akamai's CEO, Paul Sagan, said, “The media revolution will not be televised, it will be mobilized.”  Once again, the oft-repeated theme was mobile and connected device growth.  

Akamai unveiled a new customer portal, to be released early in 1Q12.   The portal is an exciting development as it puts more power and control in the hands of customers: turning up new sites and configurations, setting up and monitoring live events, and so on.   This will reduce reliance on Akamai's professional services for relatively straightforward configurations.

Jonathan Miller also mentioned that NewsCorp websites get more hits from social media than from search, and indicated that NewsCorp is pursuing new opportunities to create value: better targeting, new commerce models and OTT distribution.

Note: OTT, or Over-the-Top distribution, is a general term for a service you use over a network that is not offered by that network's operator.  It's called “over-the-top” because these services ride on top of the service you already get and don't require any business or technology affiliation with your network operator.


Many of the sessions were highly technical having to do with utilizing Akamai’s HD Network for streaming video, video player design, security, and streaming to Android, Apple iOS, Gaming consoles, etc.  

A number of vendors were on-site to discuss and demo their product offerings.   All were Akamai partners including, IBM, Rackspace hosting, Kit Digital, Fry, Terremark, Adobe, BMC Software, Brightcove, Digital Rapids, Elemental, Envivio, Internet Broadcasting, Jive, Ooyala, Origin Digital, Riverbed, Blaze, CSG, Compuware, Cybersource, Exceda, Flexera, Haivision, Harmonic, Hybris Software, Invodo, Kaltura, Motionpoint, Nextstreaming, Onesite, Signiant and Unicorn Media.



All in all, it was time well spent....





Monday, October 10, 2011

Adobe Max 2011

Adobe Max 2011 is the annual Adobe conference geared towards software developers working with Adobe consumer and professional products. These days, however, it is also a venue for key Adobe product announcements and business collaboration.  It was held in the Los Angeles Convention Center from October 1st through the 5th.  My colleague, Dave Smith, attended and prepared the highlights report below.  Thank you, Dave!


Adobe is making big strides these days with authoring and publishing tools for tablets (à la the Apple iPad), and Dave has covered a lot of that space below.  As an aside, I'm curious whether and how Adobe will take advantage of Apple's lapse in the professional video editing market (Final Cut Pro X) with further advances in its Adobe Premiere Pro editing software.  We'll keep our eye on video developments, but in the meantime, for those who rely on Adobe products or simply enjoy using them, please read Dave's report below:


Adobe Creative Cloud
There were a number of announcements made during the keynote, starting with the “Adobe Creative Cloud” in parallel with a new set of “Adobe Touch Apps” for content creation on tablet devices. These apps bring professional-level creativity to millions of tablet users – both consumers and professionals – and utilize hosted cloud-based services to share files, view them across devices or transfer work into various Adobe software for further refinement. Collaboration was a big focus on Adobe’s move to expand their toolset into a cloud-based framework. When the product rolls out in 2012 it will include 20GB of cloud storage for each user.
"Adobe Creative Cloud reinvents creative expression by enabling a new generation of services for creativity and publishing, that embrace touch interaction to re-imagine how individuals interact with creative tools and build deeper social connections between creatives around the world,” said Kevin Lynch, chief technology officer at Adobe. "The move to the Creative Cloud is a major component in the transformation of Adobe.”


Adobe Creative Cloud will include the following:
      Applications – Access to the portfolio of Adobe Creative Suite tools as well as the six newly announced Adobe Touch Apps. The offering will include industry-leading desktop tools such as Photoshop, InDesign, Illustrator, Dreamweaver, Premiere Pro, After Effects and innovative new tools such as Adobe Edge and Muse.
      Services – Key Adobe Digital Publishing Suite technologies, for delivering interactive publications on tablets; a tier of Adobe Business Catalyst, for building and managing websites; and new design services, such as the ability to use cloud-based fonts for website design, via technology acquired by Adobe through its newly announced acquisition of Typekit Inc.
      Community – Capabilities that enable users to present and share their work and ideas with peers around the world and a forum for feedback and inspiration that will foster connections between creative people.  Adobe Creative Cloud will become a focal point during the creative process.

Adobe Touch Apps
“Adobe Touch Apps deliver high-impact creative expression to anyone who has a tablet,” said Kevin Lynch, chief technology officer, Adobe. “With Adobe imaging magic coming to tablet devices, new apps like Photoshop Touch will open your mind about the potential of the touch interface for creativity and demonstrate that tablets are an essential part of anyone’s creative arsenal.”
Anticipating the way people are integrating tablets into their everyday lives, the new family of Adobe Touch Apps will allow users to create content on tablet devices freeing them from the desktop or laptop computer. The new Adobe Touch Apps include:
         Adobe Collage is a collaboration tool that lets creative types mix images, text and graphics and immediately transfer the result to the cloud, providing easy access in Photoshop or sharing with others. Features include importing of images, customizable pen types for drawing, adding text, and applying color themes. The canvas grows automatically to accommodate the space needed as assets are added.
         Adobe Debut lets designers present their work virtually anywhere. The app opens tablet-compatible versions of Creative Suite files for convenient viewing on the tablet, including Photoshop layers and Illustrator artboards. Feedback can be provided using a markup pen tool to add annotations on top of the work.
         Adobe Ideas is a vector-based drawing tool. Using a stylus or finger, strokes appear smooth at any zoom level. Starting with a blank canvas, users can choose color themes, and pull in tablet-compatible image files that can be controlled as separate layers. Finished results are easily accessed in Adobe Illustrator or Photoshop via their cloud integration.
         Adobe Kuler makes it easy to generate color themes which can be exported as color swatches for Adobe Creative Suite projects. Social engagement in the community is enhanced by rating and commenting on themes.
         Adobe Photoshop Touch contains core Photoshop features. With simple finger gestures, users can combine multiple photos into layered images, make popular edits and apply professional effects. The tablet-exclusive Scribble Selection Tool allows users to easily extract objects in an image by simply scribbling on what to keep and then what to remove. Additionally, the app helps users quickly find images, share creations, and view comments through integration with Facebook and Google Search. Using the syncing capabilities that are a component of Adobe Creative Cloud, files can be opened in Adobe Photoshop.
         Adobe Proto enables the development of interactive wireframes and prototypes for websites and mobile apps on a tablet. Ideas are communicated and shared with teams and clients using a touch-based interface. Gestures quickly express a design concept, explain website structure or demonstrate interactivity. The wireframe or prototype then can be exported as industry standard HTML, CSS and JavaScript, and shared in popular browsers for immediate review and approval.
Adobe Touch Apps build on the launch of Adobe Carousel, which provides access to your entire photo library across your tablets, smartphones and desktops.
Digital Publishing Suite — Single Edition
Now small design studios and freelance designers can leverage Adobe’s DPS and publish their content to the iPad for a one-time fee of $395.

Other Announcements
Adobe has acquired Typekit, a service that allows you to choose from, and easily incorporate, hundreds of fonts into your web projects. This service will be included in Adobe Creative Cloud. This could add significant design enhancements to web sites and digital publications.

Adobe announced their plans to acquire PhoneGap, a development platform which lets you build mobile applications in standard web technologies yet leverage access to native APIs across various devices and platforms.

Adobe also announced that the WoodWing publishing system will use the DPS platform going forward (to date WoodWing has developed its own method for publishing content to tablet devices). WoodWing will integrate its workflows with and standardize on Adobe's DPS tools and also become a reseller for Adobe Digital Publishing Suite.

HTML

Adobe has been busy in the HTML space -- writing specs for “CSS Regions” and “Exclusions,” which it has proposed to the W3C, as well as contributing code to the WebKit browser engine used in Safari and Chrome:

Key highlights of CSS Regions and Exclusions include:

          Story threading — allows content to flow in multiple disjointed boxes expressed in CSS and HTML, making it possible to express more complex, magazine-style threaded layouts, including pull quotes and sidebars.
          Region styling — allows content to be styled based on the region it flows into. For example, the first few lines that fit into the first region of an article may be displayed with a different color or font, or headers flowing in a particular region may have a different background color or size. Region styling is not currently implemented in the CSS Regions prototype.
          Arbitrary content shapes and exclusions — allows content to fit into arbitrary shapes (not just rectangular boxes) or to flow around complex shapes.


Flash

The new releases of Adobe Flash Player 11 and Adobe AIR 3 enable the next generation of immersive application experiences for gaming, rich media, and data-driven apps. Several demos showed rendering advances that bring rich gaming experiences, previously confined to the console, into the browser.

Adobe AIR
Native extensions for Adobe AIR provide developers with easy access to device-specific libraries and features. Upcoming Flex 4.6 and Flash Builder 4.6 releases will provide new components, access to the latest platform and device capabilities, and native install experiences.

Adobe Digital Enterprise Platform (ADEP)
ADEP software (formerly Adobe LiveCycle and CRX) is a “composite content application” platform. Much of the underlying technology is not new, but it has been packaged as a collection of components that serve as building blocks, which can be assembled in various ways depending on the solution needed.

One of the key components is CRX, an object-based data store and content repository based on the JCR 2.0 spec. Combined with CQ5, which contains a workflow engine, this platform offers a robust and extensible solution to many document management and publishing needs. The platform includes the following standard interfaces:

         Java Content Repository API 1.0 (JSR-170) and 2.0 (JSR-283)
         Content Management Interoperability Services (CMIS)
         WebDAV, including versioning, access control, and search
         Common Internet File System (CIFS) and Server Message Block (SMB) to act as network file share
         RESTful web API to build JavaScript-based content applications
         LDAP and JAAS for user provisioning
         Remoting with RMI and HTTP over DavEx
         Mounted content from third-party repositories via the native interface, for example, Microsoft SharePoint
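
To make the RESTful interface concrete, here is a hedged sketch of writing and reading a content node over CRX's Sling-style HTTP interface, assuming Python with the requests library; the host, port, content path, and credentials are all hypothetical placeholders, not values from this post:

    # A sketch of talking to CRX's RESTful (Sling-style) HTTP interface.
    # Host, port, path, and credentials are hypothetical placeholders.
    import requests

    base = "http://localhost:4502"
    auth = ("admin", "admin")

    # POSTing form fields to a path creates or updates that node's properties.
    resp = requests.post(
        base + "/content/myapp/demo",
        data={"title": "Hello", "body": "Created over HTTP"},
        auth=auth,
    )
    print(resp.status_code)

    # Appending .json to a path returns the node and its properties as JSON.
    doc = requests.get(base + "/content/myapp/demo.json", auth=auth).json()
    print(doc.get("title"))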

Sneak Peeks
Adobe revealed several new technologies being developed in their labs -- these features may or may not ever be included in shipping software, but they give some insight into the talent at work within their engineering group.

      Local Layer Ordering: A Photoshop plug-in which provides a pointer to specify which part of an image should be layered above/below another portion of the image.


      DeBlurring: Another plug-in took blurred photos, calculated the motion of the camera movement and “reversed” the motion, resulting in a crystal clear photo.

      RubbaDub: A developer from Japan created syncing software that lets you re-voice someone and then automatically syncs the new audio with the actor's lip movements.



      Another plugin took crowd-sourced video footage from various cell phones, YouTube, etc. and automatically synced up all the tracks, no matter the quality or length of the track.


      Video Meshes: The most impressive tool was one that lets you manipulate the 3D space of a piece of video in Premiere. This allows you to, for example, change the focal length of the virtual lens, or even change where a character is placed in the shot.


      
      Monocle: A sophisticated profiling application which provides telemetry data for Flex applications, so developers can quickly identify performance issues with their applications.



      Liquid Layout: This comes from InDesign and will likely be part of the Digital Publishing Suite. It provides for a layout to automatically resize and reflow based on the size of the container, which could be a viable solution for publishing the same document to various tablet devices of different sizes.


       Smart Debugging (aka “How did my code get here?”). This is a debugging tool based on a recorded trace, letting you step backwards as well as forwards through code.



       Near-field Communications for AIR. This demo showed near-field communications for Adobe AIR for mobile. We are most familiar with this for applications like payments, where you wave your mobile at a sensor, but it has plenty of potential for other scenarios, such as looking up product details without having to scan a barcode.



      Pixel Nuggets: The idea of this one is to identify “like” images by analyzing a collection of photos and searching for commonality. For example, you could select a color or shape and it will find all images which match that color and/or shape. It does a pretty good job of recognizing faces as well.