Selcuk Kopru is an experienced AI leader with proven expertise in creating and deploying cutting-edge NLP and AI technologies and systems. He has experience developing scalable machine learning solutions to big data problems involving text and multimodal data, and is skilled in Python, Java, C++, machine translation, and pattern recognition. Selcuk is also a strong research professional with a Doctor of Philosophy (PhD) in Computer Science, focused on NLP, from Middle East Technical University.

Abstract:

A multimodal embedding modifier generates a modified seed search selection embedding for providing a set of search results. The multimodal embedding modifier enhances the ability and accuracy of identifying a user's true intent when searching the online marketplace. For example, embodiments disclosed herein can allow a user to navigate multiple modalities for an item. In some embodiments, a user may select a search result corresponding to an initial search query, and further modify the selected search result by inputting a modifier (e.g., a textual modifier). The multimodal embedding modifier can be trained using a training dataset including a text embedding, an image embedding, another type of embedding, or a combination thereof.

Country: United States
Grant Date: October 29, 2024
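
The abstract leaves the combination mechanism unspecified; the sketch below illustrates one plausible reading, where a seed search-selection embedding (e.g., from the selected result's image) is interpolated with a text-modifier embedding and the result is used for nearest-neighbor retrieval. The encoder functions, dimensionality, and mixing weight are assumptions for illustration, not details from the patent.

```python
# Hypothetical sketch: modifying a seed search-selection embedding with a text
# modifier, then retrieving nearest items. The encoders and item index are
# placeholders, not part of the patented system.
import numpy as np

def encode_image(image_bytes: bytes) -> np.ndarray:
    # Placeholder image encoder returning a unit-norm embedding.
    rng = np.random.default_rng(abs(hash(image_bytes)) % (2**32))
    v = rng.normal(size=512)
    return v / np.linalg.norm(v)

def encode_text(text: str) -> np.ndarray:
    # Placeholder text encoder returning a unit-norm embedding.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=512)
    return v / np.linalg.norm(v)

def modify_embedding(seed: np.ndarray, modifier: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    # One simple way to "modify" the seed embedding: interpolate toward the
    # modifier embedding and re-normalize so cosine search still applies.
    mixed = (1.0 - alpha) * seed + alpha * modifier
    return mixed / np.linalg.norm(mixed)

def search(query: np.ndarray, item_embeddings: np.ndarray, k: int = 5) -> np.ndarray:
    # Cosine-similarity retrieval over a matrix of unit-norm item embeddings.
    scores = item_embeddings @ query
    return np.argsort(-scores)[:k]

# Usage: a selected search result (image) modified by the text "in red leather".
seed = encode_image(b"selected-result-image")
modifier = encode_text("in red leather")
modified = modify_embedding(seed, modifier)
catalog = np.stack([encode_text(f"item {i}") for i in range(1000)])
print(search(modified, catalog))
```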

Abstract:

Various methods and systems for providing indications of inconsistent attributes of item listings associated with item listing videos are described. An item listing video of an item listing is accessed via an item listing interface of an item listing system. Extracted item features of an item from the item listing video, extracted via a machine learning engine, are accessed. The extracted item features are extracted based on listing-interface item features associated with listing the item. The extracted item features of the item are compared to the listing-interface item features of the item. Based on this comparison, an inconsistent attribute between an extracted item feature and a listing-interface item feature associated with listing the item is identified. An indication of the inconsistent attribute is communicated to cause display of the indication at the item listing interface.

Country: United States
Grant Date: October 22, 2024
INVENTORS: Selcuk Kopru, Ellis Luk
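
As a rough illustration of the comparison step described above, the following sketch checks listing-interface attributes against attributes extracted from the listing video and collects any mismatches. The attribute names and dictionary representation are assumptions for illustration only.

```python
# Hypothetical sketch: comparing item features extracted from a listing video
# against the features the seller entered in the listing interface, and
# reporting inconsistent attributes. Field names are illustrative only.
from typing import Dict, List

def find_inconsistent_attributes(
    extracted: Dict[str, str], listed: Dict[str, str]
) -> List[dict]:
    inconsistencies = []
    for attribute, listed_value in listed.items():
        extracted_value = extracted.get(attribute)
        if extracted_value is not None and extracted_value.lower() != listed_value.lower():
            inconsistencies.append(
                {"attribute": attribute, "listed": listed_value, "extracted": extracted_value}
            )
    return inconsistencies

# Usage: the video shows a blue item but the listing says "red".
extracted_features = {"color": "blue", "brand": "Acme"}
listing_features = {"color": "red", "brand": "Acme", "size": "M"}
for issue in find_inconsistent_attributes(extracted_features, listing_features):
    print(f"Inconsistent {issue['attribute']}: listed '{issue['listed']}', video shows '{issue['extracted']}'")
```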

Abstract:

Different action user-interface components in a comparison view are described. Initially, a selection is received to display a comparison view via a user interface of a listing platform. Multiple listings of the listing platform are selected for inclusion in the comparison view. A comparison view system determines which action of a plurality of actions, used by the listing platform, to associate with each of the listings. A display device displays the multiple listings concurrently in a comparison view via a user interface of the listing platform and also displays an action user-interface component (e.g., a button) in each of the plurality of listings. The action user-interface component is selectable to initiate the action associated with the respective listing. In accordance with the described techniques, the action user-interface component displayed in at least two of the multiple listings is selectable to initiate different actions in relation to the respective listing.

Country: China
Grant Date: October 18, 2024
INVENTORS: Lakshimi Duraivenkatesh, Selcuk Kopru, Tomer Lancewicki, Ramesh Periyathambi, Sai Siripurapu
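
A minimal sketch of how a comparison view system might decide which action each listing's button initiates is shown below; the listing fields (format, stock state) and action names are hypothetical and not taken from the patent.

```python
# Hypothetical sketch: choosing which action component to attach to each
# listing shown in a comparison view. Fields and action names are illustrative.
from dataclasses import dataclass

@dataclass
class Listing:
    listing_id: str
    format: str        # e.g. "auction" or "fixed_price"
    in_stock: bool

def action_for_listing(listing: Listing) -> str:
    # A plausible mapping from listing state to the action its button triggers.
    if not listing.in_stock:
        return "watch"
    if listing.format == "auction":
        return "place_bid"
    return "add_to_cart"

def build_comparison_view(listings: list[Listing]) -> list[dict]:
    # Each entry pairs a listing with the action its UI component initiates;
    # different listings in the same view can therefore expose different actions.
    return [{"listing_id": l.listing_id, "action": action_for_listing(l)} for l in listings]

print(build_comparison_view([
    Listing("A1", "auction", True),
    Listing("B2", "fixed_price", True),
    Listing("C3", "fixed_price", False),
]))
```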

Abstract:

Different action user-interface components in a comparison view are described. Initially, a selection is received to display a comparison view via a user interface of a listing platform. Multiple listings of the listing platform are selected for inclusion in the comparison view. A comparison view system determines which action of a plurality of actions, used by the listing platform, to associate with each of the listings. A display device displays the multiple listings concurrently in a comparison view via a user interface of the listing platform and also displays an action user-interface component (e.g., a button) in each of the plurality of listings. The action user-interface component is selectable to initiate the action associated with the respective listing. In accordance with the described techniques, the action user-interface component displayed in at least two of the multiple listings is selectable to initiate different actions in relation to the respective listing.

Country: United States
Grant Date: October 8, 2024
INVENTORS: Lakshimi Duraivenkatesh, Selcuk Kopru, Tomer Lancewicki, Ramesh Periyathambi, Sai Siripurapu

Abstract:

Methods for determining which image of a set of images to present in a search results page for a product are described. Components of a server system may receive a set of images for a set of items associated with a product. Components of the server system may perform image ranking to rank the set of images to identify a representative image of the set of images for the product, based on a user interaction metric of each image of the set of images. The components of the server system may then receive, from a user device, a search query that may be mapped to the product, and the components of the server system may transmit, to the user device, the search results page that includes at least one item of the set of items and the representative image based on the interaction metric of the representative image.

Country: China
Grant Date: September 24, 2024
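
The sketch below illustrates the ranking idea under the assumption that the user interaction metric is a per-image click-through rate; the patent does not specify the metric, so the fields and scoring are illustrative.

```python
# Hypothetical sketch: ranking candidate images for a product by a per-image
# user-interaction metric and picking the representative image to show on the
# search results page.
from dataclasses import dataclass

@dataclass
class ProductImage:
    image_id: str
    clicks: int
    impressions: int

    @property
    def interaction_rate(self) -> float:
        # Simple click-through rate; the abstract only says "user interaction metric".
        return self.clicks / self.impressions if self.impressions else 0.0

def representative_image(images: list[ProductImage]) -> ProductImage:
    return max(images, key=lambda img: img.interaction_rate)

images = [
    ProductImage("img-1", clicks=40, impressions=1000),
    ProductImage("img-2", clicks=90, impressions=1200),
    ProductImage("img-3", clicks=5, impressions=300),
]
print(representative_image(images).image_id)  # -> "img-2"
```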

Abstract:

Systems and methods for processing webpage calls via multiple module responses are described. A system may receive, from a client device, a first call for module data associated with a set of webpage modules for presentation in a webpage. The system may subsequently transmit, to the client device based on receiving the first call, a first response including first module data associated with a first subset of the set of webpage modules. The first response may additionally include a token identifying the webpage. The system may additionally transmit, to the client device based on transmitting the first response, a second response including the token identifying the webpage and second module data associated with a second subset of the set of webpage modules that differs from the first subset of the set of webpage modules.

Country: United States
Grant Date: July 16, 2024
INVENTORS: Vineet Bindal, Lakshimi Duraivenkatesh, Selcuk Kopru, Tomer Lancewicki, Nagasita Raghuram Nimishakavi Venkata, Ramesh Periyathambi
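
One way to picture the two-response flow is sketched below: a single call yields a first response with fast module data and a token identifying the webpage, then a second response carrying the same token with the remaining module data. The module names, token format, and generator-based API are assumptions.

```python
# Hypothetical sketch: answering a single call for webpage module data with two
# responses, each carrying the same webpage token, so the client can render
# fast modules first and attach slower modules later. Names are illustrative.
import uuid
from typing import Iterator

MODULE_DATA = {
    "header": {"title": "Deals"},              # fast module
    "search_bar": {"placeholder": "Search"},
    "recommendations": {"items": [1, 2, 3]},   # slower, model-backed module
}

FAST_MODULES = {"header", "search_bar"}

def handle_module_call(requested_modules: list[str]) -> Iterator[dict]:
    token = str(uuid.uuid4())  # identifies the webpage across both responses
    first = {m: MODULE_DATA[m] for m in requested_modules if m in FAST_MODULES}
    yield {"token": token, "modules": first}
    second = {m: MODULE_DATA[m] for m in requested_modules if m not in FAST_MODULES}
    yield {"token": token, "modules": second}

# Usage: the client receives the first response immediately and the second later.
for response in handle_module_call(["header", "search_bar", "recommendations"]):
    print(response)
```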

Abstract:

A video is provided to viewers using a web-based platform without restricted audio, such as a copyrighted soundtrack. To do so, a video comprising at least two audio layers is received. The audio layers can include separate and distinct audio layers or a mix of audio from separate sources. A restricted audio element is identified in a first audio layer and a speech element is identified in a second audio layer. A stitched text string can be generated by performing speech-to-text on both audio layers and removing the text corresponding to the restricted audio element of the second audio layer. When playing back the video, a portion of the video is muted based on the restricted audio element. A voice synthesizer is employed to generate audible sound during the muted portion using the stitched text string.

Country: China
Grant Date: June 4, 2024
INVENTORS: Tony Haro, Selcuk Kopru, Ellis Luk
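
A simplified sketch of the stitching and muting steps follows, assuming both audio layers have already been transcribed into timed segments labeled as restricted or speech; the segment structure and labels are illustrative, and the speech-to-text and voice-synthesis stages are omitted.

```python
# Hypothetical sketch: building the "stitched" text string from two transcribed
# audio layers by keeping speech segments and dropping segments that belong to
# a restricted audio element, while recording which intervals of the video to
# mute. Segment structure is an assumption.
from dataclasses import dataclass

@dataclass
class Segment:
    start: float       # seconds
    end: float
    text: str
    restricted: bool   # True if this segment belongs to the restricted layer

def stitch_and_mute(segments: list[Segment]) -> tuple[str, list[tuple[float, float]]]:
    kept_text = []
    mute_intervals = []
    for seg in sorted(segments, key=lambda s: s.start):
        if seg.restricted:
            mute_intervals.append((seg.start, seg.end))   # mute the soundtrack here
        else:
            kept_text.append(seg.text)                    # keep the speech for resynthesis
    return " ".join(kept_text), mute_intervals

segments = [
    Segment(0.0, 4.0, "welcome to my listing", restricted=False),
    Segment(2.0, 10.0, "(copyrighted song lyrics)", restricted=True),
    Segment(4.0, 8.0, "this jacket is brand new", restricted=False),
]
stitched, muted = stitch_and_mute(segments)
print(stitched)  # text a voice synthesizer would speak over the muted portions
print(muted)
```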

Abstract:

A video is provided to viewers using a web-based platform without restricted audio, such as a copyrighted soundtrack. To do so, a video comprising at least two audio layers is received. The audio layers can include separate and distinct audio layers or a mix of audio from separate sources. A restricted audio element is identified in a first audio layer and a speech element is identified in a second audio layer. A stitched text string can be generated by performing speech-to-text on both audio layers and removing the text corresponding to the restricted audio element of the second audio layer. When playing back the video, a portion of the video is muted based on the restricted audio element. A voice synthesizer is employed to generate audible sound during the muted portion using the stitched text string.

Country: Germany
Grant Date: April 10, 2024
INVENTORS: Tony Haro, Selcuk Kopru, Ellis Luk

Abstract:

A video is provided to viewers using a web-based platform without restricted audio, such as a copyrighted soundtrack. To do so, a video comprising at least two audio layers is received. The audio layers can include separate and distinct audio layers or a mix of audio from separate sources. A restricted audio element is identified in a first audio layer and a speech element is identified in a second audio layer. A stitched text string can be generated by performing speech-to-text on both audio layers and removing the text corresponding to the restricted audio element of the second audio layer. When playing back the video, a portion of the video is muted based on the restricted audio element. A voice synthesizer is employed to generate audible sound during the muted portion using the stitched text string.

Country: United Kingdom
Grant Date: April 10, 2024
INVENTORS: Tony Haro, Selcuk Kopru, Ellis Luk

Abstract:

Methods for determining which image of a set of images to present in a search results page for a product are described. Components of a server system may receive a set of images for a set of items associated with a product. Components of the server system may perform image ranking to rank the set of images to identify a representative image of the set of images for the product, based on a user interaction metric of each image of the set of images. The components of the server system may then receive, from a user device, a search query that may be mapped to the product, and the components of the server system may transmit, to the user device, the search results page that includes at least one item of the set of items and the representative image based on the interaction metric of the representative image.

Country: United States
Grant Date: October 31, 2023

Abstract:

Systems and methods for processing webpage calls via multiple module responses are described. A system may receive, from a client device, a first call for module data associated with a set of webpage modules for presentation in a webpage. The system may subsequently transmit, to the client device based on receiving the first call, a first response including first module data associated with a first subset of the set of webpage modules. The first response may additionally include a token identifying the webpage. The system may additionally transmit, to the client device based on transmitting the first response, a second response including the token identifying the webpage and second module data associated with a second subset of the set of webpage modules that differs from the first subset of the set of webpage modules.

Country: United States
Grant Date: October 3, 2023
INVENTORS: Vineet Bindal, Lakshimi Duraivenkatesh, Selcuk Kopru, Tomer Lancewicki, Nagasita Raghuram Nimishakavi Venkata, Ramesh Periyathambi

Abstract:

Process flow graphs are generated from system trace data by obtaining raw distributed trace data for a system, aggregating the raw distributed trace data into aggregated distributed trace data, generating a plurality of process flow graphs from the aggregated distributed trace data, and storing the plurality of process flow graphs in a graphical store. A first critical path can be determined from the plurality of process flow graphs based on an infrastructure design for the system and a process flow graph corresponding to the first critical path provided for graphical display. Certain examples can determine a second critical path involving a selected element of the first critical path and provide the process flow graph for the second critical path for display. Some examples pre-process the aggregated distributed trace data to repair incorrect traces. Other examples merge included process flow graphs into longer graphs.

Country: United States
Grant Date: September 26, 2023
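
To make the graph-building and critical-path ideas concrete, the sketch below aggregates caller/callee spans into a flow graph and picks the path with the largest total duration; the span fields and longest-path heuristic are assumptions, and the trace-repair and graph-merging steps described above are omitted.

```python
# Hypothetical sketch: aggregating distributed-trace spans into a process flow
# graph and finding a critical path by total duration. Span fields and the
# longest-path heuristic are assumptions, not the patented method.
from collections import defaultdict

# Each span: (caller service, callee service, duration in ms).
spans = [
    ("gateway", "search", 120), ("search", "ranking", 80),
    ("search", "ads", 30), ("gateway", "checkout", 60),
]

def build_graph(spans):
    graph = defaultdict(list)
    for caller, callee, duration in spans:
        graph[caller].append((callee, duration))
    return graph

def critical_path(graph, root):
    # Depth-first search for the path with the largest summed duration.
    best_path, best_cost = [root], 0
    for child, duration in graph.get(root, []):
        sub_path, sub_cost = critical_path(graph, child)
        if duration + sub_cost > best_cost:
            best_path, best_cost = [root] + sub_path, duration + sub_cost
    return best_path, best_cost

graph = build_graph(spans)
print(critical_path(graph, "gateway"))  # -> (['gateway', 'search', 'ranking'], 200)
```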

Abstract:

A web-based item listing platform provides item listings that users can create or search. Item listings can be generated using structured information extracted while capturing an item listing video of the item. During creation of the item listing video, input prompts are provided to the user that cause a mobile device to provide an input request, such as taking an image of a specific feature of the item or providing some other item description information. During the item listing video, image recognition models may also be employed to determine other item description information, such as the color, the brand, and the like. The item listing can be generated from the item listing video by populating a set of structured data elements associated with an item description type. Each structured data element is populated with the item description information corresponding to the associated item description type.

Country: United States
Grant Date: August 22, 2023
INVENTORS: Tony Haro, Selcuk Kopru, Ellis Luk, Ashok Ramani, Vikas Singh, Valeri Yee
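
The sketch below illustrates how structured data elements might be populated from a mix of prompted user inputs and image-recognition outputs gathered during video capture; the prompts, field names, and recognizer stub are hypothetical.

```python
# Hypothetical sketch: populating structured item-description fields from
# prompted inputs and image-recognition outputs gathered while the listing
# video is captured. Prompts, fields, and the recognizer are illustrative.
PROMPTS = [
    ("brand", "Show the brand label to the camera"),
    ("condition", "Describe the item's condition"),
]

def recognize_from_frame(frame: bytes) -> dict:
    # Stand-in for an image-recognition model that infers attributes from video frames.
    return {"color": "blue"}

def build_listing(prompt_answers: dict, frame: bytes) -> dict:
    listing = {field: None for field in ("brand", "condition", "color")}
    listing.update(recognize_from_frame(frame))      # model-derived attributes
    for field, _prompt in PROMPTS:
        if field in prompt_answers:
            listing[field] = prompt_answers[field]   # user-provided attributes
    return listing

print(build_listing({"brand": "Acme", "condition": "like new"}, b"frame-bytes"))
```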

Abstract:

A video is provided to viewers using a web-based platform without restricted audio, such as a copyrighted soundtrack. To do so, a video comprising at least two audio layers is received. The audio layers can include separate and distinct audio layers or a mix of audio from separate sources. A restricted audio element is identified in a first audio layer and a speech element is identified in a second audio layer. A stitched text string can be generated by performing speech-to-text on both audio layers and removing the text corresponding to the restricted audio element of the second audio layer. When playing back the video, a portion of the video is muted based on the restricted audio element. A voice synthesizer is employed to generate audible sound during the muted portion using the stitched text string.

Country: United States
Grant Date: February 21, 2023
INVENTORS: Tony Haro, Selcuk Kopru, Ellis Luk

Abstract:

According to various embodiments, the Query Context Translation Engine identifies a topic of a search query history received during a current user session. The search query history is in a first language. The Query Context Translation Engine identifies, in a translation table, target text that corresponds with a query in the search query history, the target text comprising at least one word. The Query Context Translation Engine obtains at least one search result based on a translation of the target text in a second language.

Country: United States
Grant Date: January 24, 2023
INVENTORS: Sanjika Hewavitharana, Selcuk Kopru, Hassan Sawaf
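
A toy sketch of the lookup-and-translate flow is shown below: the session's first-language query history supplies a topic, a translation table provides target text, and the translated text drives the search. The table contents, example queries, and topic heuristic are assumptions.

```python
# Hypothetical sketch: using the session's query history (in a first language)
# to pick target text from a translation table and search with its translation
# in a second language. Table contents and heuristics are illustrative.
session_history = ["zapatos de cuero", "zapatos rojos"]   # first-language queries

TRANSLATION_TABLE = {
    # source text -> candidate target texts in the second language
    "zapatos rojos": ["red shoes", "red footwear"],
}

def detect_topic(history: list[str]) -> str:
    # Crude topic signal: the most recent query in the session.
    return history[-1]

def translate_query(history: list[str]) -> str:
    topic = detect_topic(history)
    candidates = TRANSLATION_TABLE.get(topic, [topic])
    return candidates[0]   # pick a target text; a real system would rank by context

def search(query: str) -> list[str]:
    return [f"result for '{query}'"]   # placeholder search backend

print(search(translate_query(session_history)))
```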

Abstract:

Techniques are disclosed for automatically adjusting machine learning parameters in an e-commerce system. Hyperparameters of a machine learning component are tuned using a gradient estimator and a first training set representative of an e-commerce context. The machine learning component is trained using the tuned hyperparameters and the first training set. The hyperparameters are automatically re-tuned using the gradient estimator and a second training set representative of a changed e-commerce context. The machine learning component is re-trained using the re-tuned hyperparameters and the second training set.

Country: United States
Grant Date: December 6, 2022
INVENTORS: Selcuk Kopru, Tomer Lancewicki
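
As a concrete illustration of gradient-estimator-based tuning, the sketch below adjusts a single hyperparameter using a finite-difference estimate of validation loss, then re-tunes after the data distribution shifts; the toy loss function and single hyperparameter are assumptions, not the patented method.

```python
# Hypothetical sketch: tuning a single hyperparameter with a finite-difference
# gradient estimate of validation loss, then re-tuning when the training data
# (the e-commerce context) changes. The loss is a toy placeholder.
def validation_loss(reg: float, data_shift: float) -> float:
    # Toy loss curve whose optimum moves when the data distribution shifts.
    return (reg - (1.0 + data_shift)) ** 2

def tune(reg: float, data_shift: float, lr: float = 0.1, eps: float = 1e-3, steps: int = 200) -> float:
    for _ in range(steps):
        grad = (validation_loss(reg + eps, data_shift) -
                validation_loss(reg - eps, data_shift)) / (2 * eps)
        reg -= lr * grad          # gradient-estimator update of the hyperparameter
    return reg

reg = tune(reg=0.0, data_shift=0.0)        # initial e-commerce context
print(round(reg, 2))                        # ~1.0
reg = tune(reg=reg, data_shift=0.5)         # context changed: automatically re-tune
print(round(reg, 2))                        # ~1.5
```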

Abstract:

A web browser extension identifies graphic objects from images or video being presented by a web browser. Webpages related to the graphic objects are identified. Web links that facilitate navigation to the webpages are embedded over an area of the image corresponding to the identified graphic image. Where the graphic objects are identified within video, the web links are progressively embedded within graphic object boundaries of the graphic object as the graphic objects move locations during progression of the video. In this way, a user is able to interact with graphic objects of images and video to navigate to webpages related to the graphic objects. Some implementations provide a webpage redirect command at a stop point of the video so that the user can interact with graphic objects while the video is playing and without interrupting the video.

Country: United States
Grant Date: November 29, 2022
INVENTORS: Selcuk Kopru, Ellis Luk, Valeri Yee
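
The sketch below shows one way an extension might turn per-frame object detections into clickable overlays positioned over each object's bounding box; detection and webpage lookup are stubbed out, and the coordinates and data shapes are illustrative.

```python
# Hypothetical sketch: turning detected graphic objects in a video frame into
# clickable overlay links positioned over the objects' bounding boxes. Object
# detection and webpage lookup are stubbed out; coordinates are in pixels.
from dataclasses import dataclass

@dataclass
class DetectedObject:
    label: str
    box: tuple[int, int, int, int]   # (x, y, width, height) within the frame

def lookup_webpage(label: str) -> str:
    # Stand-in for resolving a detected object to a related webpage.
    return f"https://example.com/search?q={label}"

def overlay_links(objects: list[DetectedObject], timestamp: float) -> list[dict]:
    # One overlay per object; as objects move between frames, the extension
    # would recompute these boxes so the link tracks the object.
    return [
        {"href": lookup_webpage(o.label), "box": o.box, "t": timestamp}
        for o in objects
    ]

frame_objects = [DetectedObject("handbag", (120, 80, 60, 60)),
                 DetectedObject("sneaker", (300, 200, 90, 70))]
print(overlay_links(frame_objects, timestamp=12.4))
```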

Abstract:

Different action user-interface components in a comparison view are described. Initially, a selection is received to display a comparison view via a user interface of a listing platform. Multiple listings of the listing platform are selected for inclusion in the comparison view. A comparison view system determines which action of a plurality of actions, used by the listing platform, to associate with each of the listings. A display device displays the multiple listings concurrently in a comparison view via a user interface of the listing platform and also displays an action user-interface component (e.g., a button) in each of the plurality of listings. The action user-interface component is selectable to initiate the action associated with the respective listing. In accordance with the described techniques, the action user-interface component displayed in at least two of the multiple listings is selectable to initiate different actions in relation to the respective listing.

Country: United States
Grant Date: September 6, 2022
INVENTORS: Lakshimi Duraivenkatesh, Selcuk Kopru, Tomer Lancewicki, Ramesh Periyathambi, Sai Siripurapu

Abstract:

Techniques for prefetching operation-cost-based digital content and digital content with emphasis, which overcome the challenges of conventional systems, are described. In one example, a computing device may receive digital content representations of digital content from a service provider system, which are displayed on a user interface of the computing device. Thereafter, the computing device may also receive digital content as prefetches having a changed display characteristic that emphasizes a portion of the digital content, based on a model trained using machine learning. Alternatively, the computing device may receive digital content as a prefetch based on a model trained using machine learning, in which the model addresses a likelihood of conversion of a good or service and an operation cost of providing the digital content. Upon receiving a user input selecting one of the digital content representations, the digital content is rendered in the user interface of the computing device.

Country: United States
Grant Date: May 3, 2022
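
A minimal sketch of cost-aware prefetch selection follows, scoring candidates by predicted conversion likelihood per unit operation cost under a budget; the greedy rule, field names, and budget are assumptions rather than the patented technique.

```python
# Hypothetical sketch: deciding which digital content to prefetch by weighing a
# model's predicted conversion likelihood against the operation cost of serving
# it. The scoring rule and budget are illustrative.
from dataclasses import dataclass

@dataclass
class Candidate:
    content_id: str
    conversion_likelihood: float   # model output in [0, 1]
    operation_cost: float          # relative cost of prefetching/serving

def prefetch_set(candidates: list[Candidate], budget: float) -> list[str]:
    # Greedy selection by likelihood-per-cost until the cost budget is spent.
    chosen, spent = [], 0.0
    ranked = sorted(candidates, key=lambda c: c.conversion_likelihood / c.operation_cost, reverse=True)
    for c in ranked:
        if spent + c.operation_cost <= budget:
            chosen.append(c.content_id)
            spent += c.operation_cost
    return chosen

candidates = [
    Candidate("listing-17", 0.30, 1.0),
    Candidate("listing-42", 0.25, 0.4),
    Candidate("listing-99", 0.05, 0.8),
]
print(prefetch_set(candidates, budget=1.5))   # -> ['listing-42', 'listing-17']
```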

Abstract:

Different action user-interface components in a comparison view are described. Initially, a selection is received to display a comparison view via a user interface of a listing platform. Multiple listings of the listing platform are selected for inclusion in the comparison view. A comparison view system determines which action of a plurality of actions, used by the listing platform, to associate with each of the listings. A display device displays the multiple listings concurrently in a comparison view via a user interface of the listing platform and also displays an action user-interface component (e.g., a button) in each of the plurality of listings. The action user-interface component is selectable to initiate the action associated with the respective listing. In accordance with the described techniques, the action user-interface component displayed in at least two of the multiple listings is selectable to initiate different actions in relation to the respective listing.

Country: Republic of Korea
Grant Date: April 6, 2022
INVENTORS: Lakshimi Duraivenkatesh, Selcuk Kopru, Tomer Lancewicki, Ramesh Periyathambi, Sai Siripurapu

Abstract:

A web browser extension identifies graphic objects from images or video being presented by a web browser. Webpages related to the graphic objects are identified. Web links that facilitate navigation to the webpages are embedded over an area of the image corresponding to the identified graphic image. Where the graphic objects are identified within video, the web links are progressively embedded within graphic object boundaries of the graphic object as the graphic objects move locations during progression of the video. In this way, a user is able to interact with graphic objects of images and video to navigate to webpages related to the graphic objects. Some implementations provide a webpage redirect command at a stop point of the video so that the user can interact with graphic objects while the video is playing and without interrupting the video.

Country: United States
Grant Date: March 1, 2022
INVENTORS: Selcuk Kopru, Ellis Luk, Valeri Yee

Abstract:

Systems and methods for processing webpage calls via multiple module responses are described. A system may receive, from a client device, a first call for module data associated with a set of webpage modules for presentation in a webpage. The system may subsequently transmit, to the client device based on receiving the first call, a first response including first module data associated with a first subset of the set of webpage modules. The first response may additionally include a token identifying the webpage. The system may additionally transmit, to the client device based on transmitting the first response, a second response including the token identifying the webpage and second module data associated with a second subset of the set of webpage modules that differs from the first subset of the set of webpage modules.

Country: United States
Grant Date: January 11, 2022
INVENTORS: Vineet Bindal, Lakshimi Duraivenkatesh, Selcuk Kopru, Tomer Lancewicki, Nagasita Raghuram Nimishakavi Venkata, Ramesh Periyathambi

Abstract:

According to various embodiments, the Query Context Translation Engine identifies a topic of a search query history received during a current user session. The search query history is in a first language. The Query Context Translation Engine identifies, in a translation table, target text that corresponds with a query in the search query history, the target text comprising at least one word. The Query Context Translation Engine obtains at least one search result based on a translation of the target text in a second language.

Country: United States
Grant Date: January 21, 2020
INVENTORS: Sanjika Hewavitharana, Selcuk Kopru, Hassan Sawaf

Abstract:

In various example embodiments, a system and method for a Target Language Engine are presented. The Target Language Engine augments a synonym list in a base dictionary of a target language with one or more historical search queries previously submitted to search one or more listings in listing data. The Target Language Engine identifies a compound word and a plurality of words present in the listing data that have a common meaning in the target language. Each word from the plurality of words is present in the compound word. The Target Language Engine causes a database to create an associative link between the portion of text and a word selected from at least one of the synonym list or the plurality of words.

Country: United States
Grant Date: December 31, 2019
INVENTORS: Justin House, Chandra Khatri, Selcuk Kopru, Nish Parikh, Sameep Solanki
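
The sketch below illustrates the two ideas in the abstract with a hypothetical German example: historical queries augment a synonym list, and a compound word is linked to the separate words it is built from; the vocabulary, example words, and in-memory stand-in for the database are illustrative.

```python
# Hypothetical sketch: augmenting a target-language synonym list with historical
# search queries and linking a compound word to the separate words it contains.
# The German example and the in-memory "database" are illustrative only.
base_synonyms = {"damenschuhe": ["schuhe für damen"]}
historical_queries = ["damen schuhe", "schuhe damen"]

# Augment the synonym list with previously submitted queries.
base_synonyms["damenschuhe"].extend(historical_queries)

def split_compound(compound: str, vocabulary: set[str]) -> list[str]:
    # Check whether the compound is a concatenation of two known words.
    for i in range(1, len(compound)):
        left, right = compound[:i], compound[i:]
        if left in vocabulary and right in vocabulary:
            return [left, right]
    return []

listing_vocabulary = {"damen", "schuhe", "jacke"}
links = {}  # stands in for the database's associative links
parts = split_compound("damenschuhe", listing_vocabulary)
if parts:
    links["damenschuhe"] = parts + base_synonyms["damenschuhe"]
print(links)
```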
