Robinson Piramuthu
Research Scientist
Biography

Robinson Piramuthu joined the eBay New Product Development organization in February 2016, where he is currently the head of “AI - Computer Vision”. He has over 20 years of experience in computer vision.

Robinson joined eBay Research Labs in 2011 and later headed its computer vision team, which specialized in computer vision research for visual commerce, including large-scale visual search, coarse- and fine-grained visual recognition, 3D cues from 2D images, figure-ground segmentation, and deep learning for vision, among others. In 2015, he led the computer vision team in the eBay Cognitive Computing Group.

He received his PhD in Electrical Engineering and Computer Science from the University of Michigan in 2000, specializing in information theory and statistical image processing, after which he worked at KLA-Tencor for 6 years with a focus on computer vision for semiconductor inspection equipment. He then worked at FlashFoto Inc., a startup company, for 5 years on topics such as visual saliency, auto-cropping, background removal, and human/skin/hair segmentation in various poses from consumer-grade color pictures. He also holds an MS in control theory from the University of Florida, specializing in robust and nonlinear control systems.

Robinson has technical publications at top conferences such as CVPR, KDD, WSDM, WACV, and ICIP. He co-organized the workshop on Large Scale Visual Commerce at ICCV ’13 in Sydney and at CVPR ’15. He has 27 issued patents, and over 25 more under review or in preparation.

Publications
ICCV, December, 2015

HD-CNN: Hierarchical Deep Convolutional Neural Network for Image Classification

Zhicheng Yan, Hao Zhang, Robinson Piramuthu, Vignesh Jagadeesh, Dennis DeCoste, Wei Di, Yizhou Yu

In image classification, visual separability between different object categories is highly uneven, and some categories are more difficult to distinguish than others. Such difficult categories demand more dedicated classifiers. However, existing deep convolutional neural networks (CNN) are trained as flat N-way classifiers, and few efforts have been made to leverage the hierarchical structure of categories.

In this paper, we introduce hierarchical deep CNNs (HD-CNNs) by embedding deep CNNs into a category hierarchy. An HD-CNN separates easy classes using a coarse category classifier while distinguishing difficult classes using fine category classifiers. During HD-CNN training, component-wise pretraining is followed by global finetuning with a multinomial logistic loss regularized by a coarse category consistency term.

In addition, conditional executions of fine category classifiers and layer parameter compression make HD-CNNs scalable for large-scale visual recognition. We achieve state-of-the-art results on both CIFAR100 and large-scale ImageNet 1000-class benchmark datasets. In our experiments, we build up three different HD-CNNs and they lower the top-1 error of the standard CNNs by 2.65%, 3.1% and 1.1%, respectively.
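
A minimal NumPy sketch of the coarse-to-fine combination described above (illustrative only; the array names, shapes and toy numbers are assumptions, not the paper's code): the coarse classifier's probabilities weight the predictions of the corresponding fine classifiers.

import numpy as np

def hd_cnn_predict(coarse_probs, fine_probs):
    # coarse_probs: (K,) probabilities over K coarse categories.
    # fine_probs:   (K, C) row k holds fine classifier k's probabilities
    #               over the C final classes.
    combined = coarse_probs @ fine_probs   # sum_k B_k(x) * p_k(y|x)
    return combined / combined.sum()       # renormalize to a distribution

# Toy example: 2 coarse categories, 4 final classes.
coarse = np.array([0.7, 0.3])
fine = np.array([[0.5, 0.4, 0.05, 0.05],
                 [0.1, 0.1, 0.6, 0.2]])
print(hd_cnn_predict(coarse, fine))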

ICIP, September, 2015

Mine the Fine: Fine-Grained Fragment Discovery

M. Hadi Kiapour, Wei Di, Vignesh Jagadeesh, Robinson Piramuthu

While discriminative visual element mining has been introduced before, in this paper we present an approach that requires minimal annotation in both training and test time. Given only a bounding box localization of the foreground objects, our approach automatically transforms the input images into a roughly-aligned pose space and discovers the most discriminative visual fragments for each category.

These fragments are then used to learn robust classifiers that discriminate between very similar categories under challenging conditions such as large variations in pose or habitats. The minimal required input is a critical characteristic that enables our approach to generalize over visual domains where expert knowledge is not readily available.

Moreover, our approach takes advantage of deep networks that are targeted towards fine-grained classification. It learns mid-level representations that are specific to a category while generalizing well across the category's instances.

Our evaluations demonstrate that the automatically learned representation based on discriminative fragments significantly outperforms globally extracted deep features in classification accuracy.

ICVS, July, 2015

Efficient Media Retrieval from Non-Cooperative Queries

Kevin Shih, Wei Di, Vignesh Jagadeesh, Robinson Piramuthu

Text is ubiquitous in the artificial world and easily attainable when it comes to book titles and author names. Using images from the book cover set of the Stanford Mobile Visual Search dataset and additional book covers and metadata from openlibrary.org, we construct a large-scale book cover retrieval dataset, complete with 100K distractor covers and title and author strings for each.

Because our query images are poorly conditioned for clean text extraction, we propose a method for extracting noisy and erroneous OCR readings and matching them against clean author and book title strings in a standard document look-up setup.

Finally, we demonstrate how to use this text-matching as a feature in conjunction with popular retrieval features such as VLAD using a simple learning setup to achieve significant improvements in retrieval accuracy over that of either VLAD or the text alone.
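
As a rough illustration of that final fusion step, a text-match score and a VLAD similarity score can be combined with a learned weighting; the feature layout and the logistic-regression choice below are assumptions for this sketch, not details taken from the paper.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-candidate features: [vlad_similarity, ocr_text_match_score]
X_train = np.array([[0.9, 0.8], [0.2, 0.1], [0.7, 0.0], [0.3, 0.9]])
y_train = np.array([1, 0, 0, 1])  # 1 = correct cover, 0 = distractor

fusion = LogisticRegression().fit(X_train, y_train)

# Rank retrieval candidates by the fused score.
candidates = np.array([[0.6, 0.7], [0.8, 0.1]])
scores = fusion.predict_proba(candidates)[:, 1]
ranking = np.argsort(-scores)
print(scores, ranking)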

CVPR 2014

Region-based Discriminative Feature Pooling for Scene Text Recognition

Chen Yu Lee, Anurag Bhardwaj, Wei Di, Vignesh Jagadeesh, Robinson Piramuthu

We present a new feature representation method for the scene text recognition problem, particularly focusing on improving scene character recognition. Many existing methods rely on Histogram of Oriented Gradients (HOG) or part-based models, which do not span the feature space well for characters in natural scene images, especially given the large variation in fonts with cluttered backgrounds.

In this work, we propose a discriminative feature pooling method that automatically learns the most informative sub-regions of each scene character within a multi-class classification framework, where each sub-region seamlessly integrates a set of low-level image features through integral images.

The proposed feature representation is compact, computationally efficient, and able to effectively model distinctive spatial structures of each individual character class. Extensive experiments conducted on challenging datasets (Chars74K, ICDAR’03, ICDAR’11, SVT) show that our method significantly outperforms existing methods on scene character classification and scene text recognition tasks.
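
The integral-image trick that makes such sub-region pooling cheap can be sketched generically (this is a textbook illustration, not the authors' implementation): after one cumulative-sum pass, the sum of any rectangular sub-region comes from four lookups.

import numpy as np

def integral_image(feature_map):
    # Cumulative sums along both axes, zero-padded on the top/left edge.
    ii = np.cumsum(np.cumsum(feature_map, axis=0), axis=1)
    return np.pad(ii, ((1, 0), (1, 0)), mode="constant")

def region_sum(ii, r0, c0, r1, c1):
    # Sum of feature_map[r0:r1, c0:c1] via four lookups.
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

fmap = np.random.rand(32, 32)
ii = integral_image(fmap)
assert np.isclose(region_sum(ii, 4, 4, 12, 20), fmap[4:12, 4:20].sum())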

ICML 2014 workshop on New Learning Models and Frameworks for BigData

Geometric VLAD for Large Scale Image Search

Zixuan Wang, Wei Di, Anurag Bhardwaj, Vignesh Jagadeesh, Robinson Piramuthu

We present a novel compact image descriptor for large scale image search. Our proposed descriptor, Geometric VLAD (gVLAD), is an extension of VLAD (Vector of Locally Aggregated Descriptors) that incorporates weak geometry information into the VLAD framework.

The proposed geometry cues are derived as a membership function over keypoint angles, which carry evident and informative cues yet are often discarded. A principled technique for learning the membership function by clustering angles is also presented.

Further, to address the overhead of iterative codebook training over real-time datasets, a novel codebook adaptation strategy is outlined. Finally, we demonstrate the efficacy of the proposed gVLAD-based retrieval framework, achieving more than 15% improvement in mAP over existing benchmarks.
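
A stripped-down sketch of the idea, under assumed shapes and a simple angle quantization (not the authors' code): each descriptor is assigned to its nearest codeword and to an orientation bin, and residuals are accumulated per (codeword, bin) pair before normalization.

import numpy as np

def gvlad(descriptors, angles, codebook, n_angle_bins=4):
    # descriptors: (N, D) local features; angles: (N,) keypoint angles in radians;
    # codebook: (K, D) visual words. Returns a (K * n_angle_bins * D,) vector.
    K, D = codebook.shape
    # Nearest codeword for each descriptor.
    assign = np.argmin(((descriptors[:, None, :] - codebook[None]) ** 2).sum(-1), axis=1)
    # Angle membership: quantize the keypoint angle into bins.
    bins = ((angles % (2 * np.pi)) // (2 * np.pi / n_angle_bins)).astype(int)
    vlad = np.zeros((K, n_angle_bins, D))
    for x, k, b in zip(descriptors, assign, bins):
        vlad[k, b] += x - codebook[k]             # accumulate residuals
    vlad = vlad.reshape(-1)
    return vlad / (np.linalg.norm(vlad) + 1e-12)  # L2 normalize

desc = np.random.rand(100, 8)
ang = np.random.rand(100) * 2 * np.pi
cb = np.random.rand(16, 8)
print(gvlad(desc, ang, cb).shape)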

arXiv, June, 2014

When Relevance is not Enough: Promoting Visual Attractiveness for Fashion E-commerce

Wei Di, Anurag Bhardwaj, Vignesh Jagadeesh, Robinson Piramuthu, Elizabeth Churchill

Fashion, and especially apparel, is the fastest-growing category in online shopping. As consumers require a sensory experience, especially for apparel goods whose appearance matters most, images play a key role not only in conveying crucial information that is hard to express in text, but also in affecting consumers' attitude and emotion towards the product.

However, research on e-commerce product images has mostly focused on quality at the perceptual level, not on the quality of the content or the way of presenting it. This study addresses the effectiveness of different image types in showcasing fashion apparel in terms of attractiveness, i.e. the ability to draw consumers' attention and interest and, in return, their engagement.

We apply advanced vision techniques to quantify attractiveness across three common display types in the fashion field, i.e. human model, mannequin, and flat. We perform a two-stage study, starting with large-scale behavioral data from a real online marketplace and then moving to a well-designed user experiment to further deepen our understanding of consumers' reasoning behind their actions.

We propose a user choice model based on the Fisher noncentral hypergeometric distribution to quantitatively evaluate user preference. Further, we investigate the potential of leveraging visual impact for a better search experience that caters to users' preferences. A visual-attractiveness-based re-ranking model that incorporates both presentation efficacy and user preference is proposed. We show quantitative improvement from promoting visual attractiveness in search on top of relevance.
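
For reference, the Fisher noncentral hypergeometric distribution underlying this choice model has the standard probability mass function below; how impressions and choices map onto these symbols is left unspecified here and the notation is illustrative.

P(X = x) = \frac{\binom{m_1}{x}\,\binom{m_2}{n - x}\,\omega^{x}}{\sum_{y} \binom{m_1}{y}\,\binom{m_2}{n - y}\,\omega^{y}}

where m_1 and m_2 are the counts of the two item types, n is the total number of items chosen, and the odds ratio \omega captures the preference for the first type.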

Proceedings of IEEE/EURASIP Workshop on Nonlinear Signal and Image Processing, Mackinac Island, Michigan, September, 1997

A method for ECT image reconstruction with uncertain MRI side information using asymptotic marginalization

Alfred O Hero III, Robinson Piramuthu

In [1], a methodology for incorporating extracted MRI anatomical boundary information into penalized likelihood (PL) ECT image reconstruction and tracer uptake estimation was proposed. This methodology used a quadratic penalty based on Gibbs weights, which enforced smoothness constraints everywhere in the image except across the MRI-extracted boundary of the ROI.

When high quality estimates of the anatomical boundary are available and MRI and ECT images are perfectly registered, the performance of this method was shown to be very close to that attainable using ideal side information, i.e. noiseless anatomical boundary estimates.

However, when the variance of the MRI-extracted boundary estimates becomes significant, this penalty function method performs poorly. We give a modified Gibbs penalty function implemented with a set of averaged Gibbs weights, where the averaging is performed with respect to a limiting form of the posterior distribution of the MRI boundary parameters.
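
In rough symbols (notation chosen here for illustration and not necessarily matching the paper), the resulting penalized-likelihood estimate with averaged Gibbs weights is

\hat{\lambda} = \arg\max_{\lambda \ge 0}\; L(Y;\lambda) \;-\; \beta \sum_{j \sim k} \bar{w}_{jk}\,(\lambda_j - \lambda_k)^2,
\qquad
\bar{w}_{jk} = \mathrm{E}\big[\, w_{jk}(\theta) \,\big],

where L(Y;\lambda) is the ECT log-likelihood, the w_{jk}(\theta) are Gibbs weights that suppress smoothing across the boundary parameterized by \theta, and the expectation is taken with respect to the limiting posterior distribution of the MRI boundary parameters.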

IEEE International Conference on Image Processing (ICIP), 2014

Cascaded Sparse Color-Localized Matching for Logo Retrieval

Rohit Pandey, Wei Di, Vignesh Jagadeesh, Robinson Piramuthu, Anurag Bhardwaj

International Symposium on Electronic Imaging, February 2016

Im2Fit: Fast 3D Model Fitting and Anthropometrics using Single Consumer Depth Camera and Synthetic Data

Qiaosong Wang, Vignesh Jagadeesh, Bryan Ressler, Robinson Piramuthu

Recent advances in consumer depth sensors have created many opportunities for human body measurement and modeling. Estimation of 3D body shape is particularly useful for fashion e-commerce applications such as virtual try-on or fit personalization.

In this paper, we propose a method for capturing accurate human body shape and anthropometrics from a single consumer grade depth sensor. We first generate a large dataset of synthetic 3D human body models using real-world body size distributions.

Next, we estimate key body measurements from a single monocular depth image. We combine body measurement estimates with local geometry features around key joint positions to form a robust multi-dimensional feature vector.

This allows us to conduct a fast nearest-neighbor search against every sample in the dataset and return the closest one. Compared to existing methods, our approach is able to predict accurate full-body parameters from a partial view using measurement parameters learned from the synthetic dataset.

Furthermore, our system is capable of generating 3D human mesh models in real-time, which is significantly faster than methods which attempt to model shape and pose deformations.

To validate the efficiency and applicability of our system, we collected a dataset that contains frontal and back scans of 83 clothed people with ground truth height and weight. Experiments on this real-world dataset show that the proposed method achieves real-time performance with competitive results, yielding an average error of 1.9 cm in estimated measurements.
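
The retrieval step can be pictured as a nearest-neighbor lookup over the synthetic database; the feature dimensionality and the use of scikit-learn below are illustrative assumptions, not the authors' implementation.

import numpy as np
from sklearn.neighbors import NearestNeighbors

# Hypothetical database: each row is a feature vector built from estimated
# body measurements plus local geometry features around key joints,
# computed for one synthetic 3D body model.
db_features = np.random.rand(10000, 24)

index = NearestNeighbors(n_neighbors=1).fit(db_features)

# Feature vector estimated from a single depth image of a real person.
query = np.random.rand(1, 24)
dist, idx = index.kneighbors(query)
print("closest synthetic model:", idx[0, 0], "distance:", dist[0, 0])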

CVPR, June, 2015

ConceptLearner: Discovering Visual Concepts from Weakly Labeled Image Collections

Bolei Zhou, Vignesh Jagadeesh, Robinson Piramuthu

Discovering visual knowledge from weakly labeled data is crucial to scaling up computer vision recognition systems, since it is expensive to obtain fully labeled data for a large number of concept categories, while weakly labeled data can be collected from the Internet cheaply and at massive scale.
 
In this paper we propose a scalable approach to discovering visual concepts from weakly labeled image collections, learning thousands of visual concept detectors. We then show that the learned detectors can be applied to recognize concepts at the image level and to detect concepts at the image-region level accurately.
 
Under domain-selected supervision, we further evaluate the learned concepts for scene recognition on the SUN database and for object detection on Pascal VOC 2007, showing promising performance compared to fully supervised and weakly supervised methods.
 
WACV, March, 2016

Fashion Apparel Detection: The Role of Deep Convolutional Neural Network and Pose-dependent Priors

Kota Hara, Vignesh Jagadeesh, Robinson Piramuthu

In this work, we propose and address a new computer vision task, which we call fashion item detection, where the aim is to detect various fashion items a person in the image is wearing or carrying. The types of fashion items we consider in this work include hat, glasses, bag, pants, shoes and so on.

The detection of fashion items can be an important first step in various e-commerce applications for the fashion industry. Our method builds on a state-of-the-art object detection approach that combines object proposals with a Deep Convolutional Neural Network.

Since the locations of fashion items are strongly correlated with the positions of body joints, we incorporate contextual information from body poses in order to improve detection performance. Through experiments, we demonstrate the effectiveness of the proposed method.
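
One way to picture the pose-dependent prior is as a re-weighting of each detection score by how plausible the detection's location is relative to a relevant body joint; the Gaussian form and the numbers below are assumptions for this sketch.

import numpy as np

def rescore(det_score, det_center, joint_pos, sigma=40.0):
    # Multiply a detector score by a Gaussian pose-dependent prior.
    # det_center: (x, y) center of the detected box.
    # joint_pos:  (x, y) of the relevant body joint (e.g. ankle for shoes).
    d2 = np.sum((np.asarray(det_center) - np.asarray(joint_pos)) ** 2)
    prior = np.exp(-d2 / (2.0 * sigma ** 2))
    return det_score * prior

# A "shoe" detection near the ankle keeps most of its score;
# the same detection near the head is heavily down-weighted.
print(rescore(0.9, (100, 400), (105, 410)))
print(rescore(0.9, (100, 50), (105, 410)))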

ICIP, Chicago, October, 1998

Side information averaging method for PML emission tomography

Robinson Piramuthu, Alfred O Hero III

The authors previously presented a methodology for incorporating perfect extracted MRI anatomical boundary estimates to improve the performance of penalized likelihood (PL) emission computed tomography (ECT) image reconstruction and ECT tracer uptake estimation. This technique used a spatially variant quadratic Gibbs penalty which enforced smoothness everywhere in the ECT image except across the MRI-extracted boundary of the ROI.

When high quality estimates of the anatomical boundary are available and the MRI and ECT images are perfectly registered, the performance of this Gibbs penalty method is very close to that attainable using perfect side information, i.e., an errorless anatomical boundary estimate. However, when the variance of the MRI-extracted boundary estimate becomes significant, this method performs poorly. Here we present a modified Gibbs penalty function which accounts for errors in side information based on an asymptotic min-max robustness approach.

The resulting penalty is implemented with a set of averaged Gibbs weights where the averaging is performed with respect to a limiting form of the min-max induced posterior distribution of the MRI boundary parameters. Examples are presented for tracer uptake estimation using the SAGE version of the EM algorithm and various parameterizations of the anatomical boundaries.

ICASSP, Seattle, vol. 5, pp. 2865-2868, May, 1998

Penalized maximum likelihood image reconstruction with min-max incorporation of noisy side information

Robinson Piramuthu, Alfred O Hero III

A method for incorporating anatomical MRI boundary side information into penalized maximum likelihood (PML) emission computed tomography (ECT) image reconstructions using a set of averaged Gibbs weights was proposed by Hero and Piramuthu (see Proc. of IEEE/EURASIP Workshop on Nonlinear Signal and Image Processing, 1997).

A quadratic penalty based on Gibbs weights was used to enforce smoothness constraints everywhere in the image except across the estimated boundary of the ROI.

In this methodology, a limiting form of the posterior distribution of the MRI boundary parameters was used to average the Gibbs weights obtained by Titus, Hero and Fessler (see IEEE Int. Conf. on Image Processing, vol. 2, Lausanne, 1996).

There is an improvement in performance over the method proposed by Titus et al. when the variance of boundary estimates from the MRI data becomes significant. Here, we present an empirical performance analysis of the proposed method of averaged Gibbs weights.

IEEE Transactions on Information Theory, vol. 45, no. 3, pp. 920-938, April 1999

Minimax emission computed tomography using high resolution anatomical side information and B-spline models

Alfred O. Hero III, Robinson Piramuthu, Jeffrey A. Fessler, Stephen R. Titus

In this paper a minimax methodology is presented for combining information from two imaging modalities having different intrinsic spatial resolutions. The focus application is emission computed tomography (ECT), a low-resolution modality for reconstruction of radionuclide tracer density, when supplemented by high-resolution anatomical boundary information extracted from a magnetic resonance image (MRI) of the same imaging volume.

The MRI boundary within the two-dimensional (2-D) slice of interest is parameterized by a closed planar curve. The Cramer–Rao (CR) lower bound is used to analyze estimation errors for different boundary shapes. Under a spatially inhomogeneous Gibbs field model for the tracer density, a representation for the minimax MRI-enhanced tracer density estimator is obtained. It is shown that the estimator is asymptotically equivalent to a penalized maximum likelihood (PML) estimator with a resolution-selective Gibbs penalty.

Quantitative comparisons are presented using the iterative space alternating generalized expectation maximization (SAGE-EM) algorithm to implement the PML estimator with and without minimax weight averaging.

Nuclear Science Symposium and Medical Imaging Conference, Lyon, France, October 2000

Performance of parametric shape estimators for 2D and 3D imaging systems

Robinson Piramuthu, Alfred O Hero III

This paper presents Cramer-Rao (CR) bounds on error covariance for 2D and 3D parametric shape estimation. The motivation for this paper is ECT image reconstruction and uptake estimation with side information corresponding to organ boundaries extracted from high resolution MRI or CT.

It is important to understand the fundamental limitations on boundary estimation error covariance so as to gauge the utility of such side information. The authors present asymptotic forms of the Fisher information matrix for estimating 2D and 3D boundaries under a B-spline polar shape parameterization.

They show that circular (2D) and spherical (3D) shapes are the easiest to estimate in the sense of yielding maximum Fisher information. They also study the worst case shapes under near circularity and near sphericity constraints. Finally, a simulation is presented to illustrate the tightness of the CR bound for a simple 3D shape estimator utilizing edge filtering.
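
The bound referred to here is the standard Cramer-Rao inequality; for an unbiased estimator \hat{\theta} of the boundary parameters \theta (notation chosen for illustration),

\mathrm{cov}_{\theta}(\hat{\theta}) \;\succeq\; F(\theta)^{-1},
\qquad
F(\theta) = \mathrm{E}_{\theta}\!\left[\nabla_{\theta}\log p(Y;\theta)\,\nabla_{\theta}\log p(Y;\theta)^{\top}\right],

so shapes that maximize the Fisher information F(\theta), circles in 2D and spheres in 3D per the result above, are the easiest to estimate.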

KDD-2013

Palette Power: Enabling Visual Search through Colors

Anurag Bhardwaj, Atish DasSarma, Wei Di, Raffay Hamid, Robinson Piramuthu, Neel Sundaresan

With the explosion of mobile devices with cameras, online search has moved beyond text to other modalities like images, voice, and writing. For many applications like fashion, image-based search offers a compelling interface compared to text forms by better capturing visual attributes.

In this paper we present a simple and fast search algorithm that uses color as the main feature for building visual search. We show that low level cues such as color can be used to quantify image similarity and also to discriminate among products with different visual appearances.

We demonstrate the effectiveness of our approach through a mobile shopping application (the eBay Fashion App, available at https://itunes.apple.com/us/app/ebay-fashion/id378358380?mt=8; eBay Image Swatch is the feature indexing millions of real-world fashion images).

Our approach outperforms several other state-of-the-art image retrieval algorithms for large scale image data.
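
The core of such a color-based search can be sketched in a few lines: build a color histogram per image and rank by histogram similarity. The quantization scheme and the histogram-intersection measure below are illustrative assumptions, not the production system.

import numpy as np

def color_histogram(img_rgb, bins_per_channel=8):
    # img_rgb: (H, W, 3) uint8 array. Returns a normalized joint color histogram.
    quant = (img_rgb // (256 // bins_per_channel)).reshape(-1, 3).astype(int)
    idx = quant[:, 0] * bins_per_channel**2 + quant[:, 1] * bins_per_channel + quant[:, 2]
    hist = np.bincount(idx, minlength=bins_per_channel**3).astype(float)
    return hist / hist.sum()

def similarity(h1, h2):
    # Histogram intersection: higher means more similar.
    return np.minimum(h1, h2).sum()

query = color_histogram(np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8))
catalog = [color_histogram(np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)) for _ in range(5)]
ranking = np.argsort([-similarity(query, h) for h in catalog])
print(ranking)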

Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Mobile Vision Workshop, 2013

Style Finder: Fine-Grained Clothing Style Recognition and Retrieval

Wei Di, Catherine Wah, Anurag Bhardwaj, Robinson Piramuthu, Neel Sundaresan

With the rapid proliferation of smartphones and tablet computers, search has moved beyond text to other modalities like images and voice. For many applications like Fashion, visual search offers a compelling interface that can capture stylistic visual elements beyond color and pattern that cannot be as easily described using text.

However, extracting and matching such attributes remains an extremely challenging task due to high variability and deformability of clothing items. In this paper, we propose a fine-grained learning model and multimedia retrieval framework to address this problem.

First, an attribute vocabulary is constructed using human annotations obtained on a novel fine-grained clothing dataset. This vocabulary is then used to train a fine-grained visual recognition system for clothing styles.

We report benchmark recognition and retrieval results on Women's Fashion Coat Dataset and illustrate potential mobile applications for attribute-based multimedia retrieval of clothing items and image annotation.

WSDM, 2014

Is a picture really worth a thousand words? On the role of images in e-commerce

Wei Di, Neel Sundaresan, Anurag Bhardwaj, Robinson Piramuthu

In online peer-to-peer marketplaces, where physical examination of the goods is infeasible, textual descriptions, product images, and the reputation of the participants play key roles. Images are a powerful channel for conveying crucial information to e-shoppers and influencing their choices.

In this paper, we investigate a well-known online marketplace where millions of products change hands and most are described with the help of one or more images. We present a systematic data mining and knowledge discovery approach that aims to quantitatively dissect the role of images in e-commerce in great detail. Our goal is two-fold.

First, we aim to get a thorough understanding of the impact of images across various dimensions: product categories, user segments, and conversion rate. We present a quantitative evaluation of the influence of images and show how to leverage different image aspects, such as quantity and quality, to effectively raise sales. Second, we study the interaction of image data with other selling dimensions by jointly modeling them with user behavior data.

Results suggest that "watch" behavior encodes complex signals combining both attention and hesitation from buyers, in which images still hold an important role compared to other selling variables, especially for products for which appearance is important. We conclude with how these findings can benefit sellers in a highly competitive online e-commerce market.

arXiv, May, 2014

Enhancing Visual Fashion Recommendations with Users in the Loop

Anurag Bhardwaj, Vignesh Jagadeesh, Wei Di, Robinson Piramuthu, Elizabeth Churchill

We describe a completely automated large scale visual recommendation system for fashion. Existing approaches have primarily relied on purely computational models for solving this problem, ignoring the role of users in the system.

In this paper, we propose to overcome this limitation by incorporating a user-centric design of visual fashion recommendations. Specifically, we propose a technique that augments 'user preferences' in models by exploiting elasticity in fashion choices. We further design a user study on these choices and gather results from the 'wisdom of crowd' for deeper analysis.

Our key insights from these results suggest that fashion preferences, when constrained to a particular class, contain important behavioral signals that are often ignored in recommendation design.

Further, the presence of such classes also reflects strong correlations with visual perception, which can be utilized to provide aesthetically pleasing user experiences. Finally, we illustrate that user approval of visual fashion recommendations can be substantially improved by carefully incorporating this user-centric feedback into the system framework.

KDD 2014

Large Scale Visual Recommendations From Street Fashion Images

Vignesh Jagadeesh, Robinson Piramuthu, Anurag Bhardwaj, Wei Di, Neel Sundaresan

We describe a completely automated large scale visual recommendation system for fashion. Our focus is to efficiently harness the availability of large quantities of online fashion images and their rich meta-data.

Specifically, we propose two classes of data-driven models, Deterministic Fashion Recommenders (DFR) and Stochastic Fashion Recommenders (SFR), for solving this problem. We analyze the relative merits and pitfalls of these algorithms through extensive experimentation on a large-scale data set and baseline them against existing ideas from color science.

We also illustrate key fashion insights learned through these experiments and show how they can be employed to design better recommendation systems.

The industrial applicability of the proposed models is in the context of mobile fashion shopping. Finally, we also outline a large-scale annotated data set of fashion images (Fashion-136K) that can be exploited for future research in data-driven visual fashion.

WACV 2014

Furniture-Geek: Understanding Fine-Grained Furniture Attributes from Freely Associated Text and Tags

Vicente Ordonez, Vignesh Jagadeesh, Wei Di, Anurag Bhardwaj, Robinson Piramuthu

As the amount of user-generated content on the internet grows, it becomes ever more important to build vision systems that learn directly from weakly annotated and noisy data. We leverage a large-scale collection of user-generated content, comprising images, tags and titles/captions of furniture inventory from an e-commerce website, to discover and categorize learnable visual attributes.

Furniture categories have long been the quintessential example of why computer vision is hard, and we make one of the first attempts to understand them through a large-scale weakly annotated dataset. We focus on a handful of furniture categories that are associated with a large number of fine-grained attributes.

We propose a set of localized feature representations built on top of state-of-the-art computer vision representations originally designed for fine-grained object categorization. We report a thorough empirical characterization of the visual identifiability of various fine-grained attributes using these representations and show encouraging results on finding iconic images and on multi-attribute prediction.

Mathematics in Image Formation and Processing, July 2000

Statistical proximal point methods for image reconstruction

A.O. Hero, S. Chrétien and Robinson Piramuthu

Information Theory Workshop on Detection, Estimation, Classification and Imaging, February, 1999

Theoretical limits of parametric shape estimation in resolution-limited imaging

Robinson Piramuthu and A.O. Hero

Patents
Friday, November 27, 2015

"Re-ranking item recommendations based on image feature data", eBay Inc., US8737729 B2, filed on September 28, 2012, issued on May 27, 2014.

Thursday, December 4, 2014

"Evaluating image sharpness", eBay Inc., US20140355881, filed on November 6, 2013, published on December 4, 2014.

Friday, September 5, 2014

"Extraction of image feature data from images", eBay Inc., US8798363 B2, filed on September 28, 2012, issued on August 5, 2014.

Tuesday, May 27, 2014

"Complementary item recommendations using image feature data", eBay Inc., US8737728 B2, filed on September 28, 2012, issued on May 27, 2014.

Tuesday, March 25, 2014

"System and methods for rule-based segmentation for vertical person of people with full or partial frontal view in color images", FlashFoto Inc. WO2009078957 A1, filed on December 12, 2008, issued on March 25, 2014.

Tuesday, March 25, 2014

"Rule-based segmentation for objects with frontal view in color images", Flashfoto Inc., US 8670615 B2, filed on December 12, 2008, issued on March 25, 2014.

Tuesday, March 11, 2014

"Refinement of segmentation markup", Flashfoto Inc., US8670615 B2, filed on September 30, 2010, issued on March 11, 2014.

Thursday, February 20, 2014

"Recommendations based on wearable sensors", eBay Inc., US20140052567 A1, filed on July 19, 2013, published on February 20, 2014.

Tuesday, April 2, 2013

"Systems and methods for segmentation by removal of monochromatic background with limited intensity variations", FlashFoto Inc., US 12/798,917, filed on April 13, 2010, issued on April 2, 2013.

Tuesday, March 13, 2012

"System and method for unsupervised local boundary or region refinement of figure mask using over and under segmentation of regions", FlashFoto Inc., US8135216 B2, filed on December 11, 2008, issued on March 13, 2012

Thursday, December 15, 2011

"Systems and methods for retargeting an image utilizing a saliency map" FlashFoto Inc., US20110305397 A1, filed on March 08, 2011, issued on December 15, 2011.

Tuesday, January 12, 2010

"Process excursion detection", KLA-Tencor, US 7646476, filed on May 9, 2008, issued on January 12, 2010.

Thursday, June 11, 2009

"System and method for unsupervised local boundary or region refinement of figure mask using over and under segmentation of regions", FlashFoto Inc., US 20090148041, filed on December 11, 2008, published on June 11, 2009.

Tuesday, August 26, 2008

"Wafer inspection systems and methods for analyzing inspection data", KLA-Tencor, US 7417724 B1, filed on May 10, 2007, issued on August 26, 2008.

Tuesday, July 1, 2008

"Process excursion detection", KLA-Tencor, US 7394534 B1, filed on Nov. 19, 2003, issued on July 1, 2008.

Tuesday, June 5, 2007

"Wafer inspection systems and methods for analyzing inspection data", KLA-Tencor, US 7227628 B1, filed on October 12, 2004, issued on June 5, 2007.

Tuesday, February 28, 2006

"Detection of spatially repeating signatures", KLA-Tencor, US 7006886 B1, filed on January 12, 2004, issued on February 28, 2006.

Tuesday, February 28, 2006

"Spatial signature analysis", KLA-Tencor, US 7006886, filed on January 12, 2004, issued on February 28, 2006.

Tuesday, April 6, 2004

"Spatial signature analysis", KLA-Tencor, US 6718526 B1, filed on February 7, 2003, issued on April 6, 2004.

Thursday, October 16, 2014

"System and method for providing fashion recommendations", eBay Inc., US20140310304, filed on December 17, 2013, published on October 16, 2014.

Thursday, October 16, 2014

"Searchable texture index", eBay Inc., US20140310131, filed on April 15, 2013, published on October 16, 2014.

Monday, November 9, 2015

"System and method for recommending home décor items based on an image of a room", eBay Inc., US 20140289069 A1, filed on November 6, 2013, published on September 25, 2014.

Thursday, September 25, 2014

"Utilizing an intra-body area network", eBay Inc., WO2014151875 A1, filed on March 13, 2014, published on September 25, 2014.

Thursday, September 18, 2014

"Method and system to build a time sensitive profile", eBay Inc., US20140280125, filed on March 14, 2013, published on September 18, 2014.

Thursday, September 18, 2014

"Method and system to utilize an intra-body area network", eBay Inc., US20140279341 A1, filed on March 14, 2013, published on September 18, 2014.

Thursday, September 18, 2014

"System and method to fit an image of an inventory part", eBay Inc., US20140282060 A1, filed on March 14, 2014, published on September 18, 2014.

Thursday, September 18, 2014

"System and method to retrieve relevant inventory using sketch-based query", eBay Inc., US20140279265 A1, filed on March 12, 2014, published on September 18, 2014.

Thursday, May 8, 2014

"Recommendations based on wearable sensors", eBay Inc., WO2014028765 A3, filed on August 15, 2013, published on May 8, 2014.

Tuesday, January 28, 2014

"Systems and methods for segmenting human hairs and faces in color images", FlashFoto Inc. US8638993 B2, filed on April 5, 2011, issued on January 28, 2014.

Thursday, April 4, 2013

"Image feature data extraction and use", eBay Inc., WO 2013/049736, filed on September 29, 2012, published on April 04, 2013.

Thursday, April 4, 2013

"Acquisition and use of query images with image feature data", eBay Inc., US20130085893 A1, filed on September 28, 2012, published on April 04, 2013.

Friday, April 4, 2014

"Item recommendations using image feature data", eBay Inc., US20130084001 A1, filed on September 28, 2012, published on April 04, 2013.

Tuesday, February 26, 2013

"Image segmentation", Flashfoto Inc., US8385609 B2, filed on October 21, 2009, published on February 26, 2013.

Tuesday, October 30, 2012

"System and method for improving display of tuned multi-scaled regions of an image with local and global control", FlashFoto Inc. US8300936 B2, filed on April 3, 2008, published on October 30, 2012.

Thursday, November 10, 2011

"Systems and methods for manifold learning for matting", FlashFoto Inc., US20110274344 A1, filed on May 10, 2011, published on November 10, 2011.

Thursday, March 31, 2011

"Systems and methods for refinement of segmentation using spray-paint markup", FlashFoto Inc., US 2011/0075926 A1, filed on September 30, 2010, published on March 31, 2011.

Thursday, June 24, 2010

"Systems and methods for segmenting an image of a person to produce a mugshot", FlashFoto Inc., US 20100158325, filed on October 21, 2008, published on June 24, 2010.

Thursday, January 14, 2010

"System and method for segmentation of an image into tuned multi-scale regions", FlashFoto Inc., US 20100008576, filed on July 11, 2008, published on January 14, 2010.

Thursday, June 25, 2009

"Systems and methods for rule-based segmentation for objects with full or partial frontal view in color images", FlashFoto Inc., WO 2009078957 A1, filed on December 12, 2008, published on June 25, 2009.

Monday, November 9, 2015

Efficient Media Retrieval", eBay Inc.

Monday, November 9, 2015

"Fashion Apparel Detection", eBay Inc.

Monday, November 9, 2015

"Method for Fine-Grained Categorization", eBay Inc.

Monday, November 9, 2015

"Fast 3D Model Fitting and Anthropometrics using Synthetic Data", eBay Inc.

Monday, November 9, 2015

"Hierarchical Deep Convolutional Neural Network for Image Classification", eBay Inc.

Monday, November 9, 2015

"Discovering Visual Concepts from Weakly Labeled Image Collections", eBay Inc.

Monday, November 9, 2015

"System and Method for Logo Retrieval using Localized Spatial Color Histogram", eBay Inc.

Monday, November 9, 2015

"Mobile remote control of an interactive display or kiosk", eBay Inc.

Monday, November 9, 2015

"Model-It", eBay Inc.

Monday, November 9, 2015

"Fashion Kibitzer", eBay Inc.

Monday, November 9, 2015

"Fashion Preference Analysis", eBay Inc.

Monday, November 9, 2015

"Geometric VLAD based efficient Large Scale Image Retrieval & System", eBay Inc.

Monday, November 9, 2015

"System and Method for Scene Text Recognition in the Wild", eBay Inc.

Monday, November 9, 2015

"System and Method for Estimating depth from a single image for Shipping and 3D Image Browsing", eBay Inc.

Monday, November 9, 2015

"Automatic Image Sharpness Diagnosis and Selling Assistant for Online e-Commerce", eBay Inc.

Monday, November 9, 2015

"Correlating image annotations with foreground features", eBay Inc.

Monday, November 9, 2015

"Image sharpness evaluation and uses", eBay Inc.

Monday, November 9, 2015

"System and Method to Visualize Fitment of Wheels and Hubcaps on Automobiles", eBay Inc.

Monday, November 9, 2015

"Methods and systems to efficiently retrieve images", eBay Inc.

Monday, November 9, 2015

"System and method for providing fashion recommendations", eBay Inc.

Monday, November 9, 2015

"System and method for recommending home decor items based on an image of a room", eBay Inc.

Monday, November 9, 2015

"System and method for scene text recognition in the wild", eBay Inc.

Monday, November 9, 2015

"System and method to understand items for home decor from freely associated text and tags", eBay Inc.