Search Results - "Maniparambil, Mayug"

  • Showing 1 - 13 results of 13
  1.
  2.

    Enhancing CLIP with GPT-4: Harnessing Visual Descriptions as Prompts by Maniparambil, Mayug, Vorster, Chris, Molloy, Derek, Murphy, Noel, McGuinness, Kevin, O'Connor, Noel E.

    “…Contrastive pretrained large Vision-Language Models (VLMs) like CLIP have revolutionized visual representation learning by providing good performance on…”
    Get full text
    Conference Proceeding
  3.

    BaseTransformers: Attention over base data-points for One Shot Learning by Maniparambil, Mayug, McGuinness, Kevin, O'Connor, Noel

    Published 05-10-2022
    “…Few shot classification aims to learn to recognize novel categories using only limited samples per category. Most current few shot methods use a base dataset…”
    Get full text
    Journal Article
  4.

    An Ensemble Deep Learning Approach for COVID-19 Severity Prediction Using Chest CT Scans by Aleem, Sidra, Maniparambil, Mayug, Little, Suzanne, O'Connor, Noel, McGuinness, Kevin

    Published 17-05-2023
    “…Chest X-rays have been widely used for COVID-19 screening; however, 3D computed tomography (CT) is a more effective modality. We present our findings on…”
    Get full text
    Journal Article
  5.

    From Unimodal to Multimodal: Scaling up Projectors to Align Modalities by Maniparambil, Mayug, Akshulakov, Raiymbek, Djilali, Yasser Abdelaziz Dahou, Narayan, Sanath, Singh, Ankit, O'Connor, Noel E

    Published 28-09-2024
    “…Recent contrastive multimodal vision-language models like CLIP have demonstrated robust open-world semantic understanding, becoming the standard image…”
    Get full text
    Journal Article
  6.

    Test-Time Adaptation with SaLIP: A Cascade of SAM and CLIP for Zero-shot Medical Image Segmentation by Aleem, Sidra, Wang, Fangyijie, Maniparambil, Mayug, Arazo, Eric, Dietlmeier, Julia, Curran, Kathleen, O'Connor, Noel E., Little, Suzanne

    “…The Segment Anything Model (SAM) and CLIP are remarkable vision foundation models (VFMs). SAM, a prompt-driven segmentation model, excels in segmentation tasks…”
    Get full text
    Conference Proceeding
  7.

    Enhancing CLIP with GPT-4: Harnessing Visual Descriptions as Prompts by Maniparambil, Mayug, Vorster, Chris, Molloy, Derek, Murphy, Noel, McGuinness, Kevin, O'Connor, Noel E

    Published 21-07-2023
    “…Contrastive pretrained large Vision-Language Models (VLMs) like CLIP have revolutionized visual representation learning by providing good performance on…”
    Get full text
    Journal Article
  8.

    Test-Time Adaptation with SaLIP: A Cascade of SAM and CLIP for Zero shot Medical Image Segmentation by Aleem, Sidra, Wang, Fangyijie, Maniparambil, Mayug, Arazo, Eric, Dietlmeier, Julia, Silvestre, Guenole, Curran, Kathleen, O'Connor, Noel E, Little, Suzanne

    Published 09-04-2024
    “…The Segment Anything Model (SAM) and CLIP are remarkable vision foundation models (VFMs). SAM, a prompt driven segmentation model, excels in segmentation tasks…”
    Get full text
    Journal Article
  9.

    Do Vision and Language Encoders Represent the World Similarly? by Maniparambil, Mayug, Akshulakov, Raiymbek, Djilali, Yasser Abdelaziz Dahou, Narayan, Sanath, Seddik, Mohamed El Amine, Mangalam, Karttikeya, O'Connor, Noel E

    Published 10-01-2024
    “…Aligned text-image encoders such as CLIP have become the de facto model for vision-language tasks. Furthermore, modality-specific encoders achieve impressive…”
    Get full text
    Journal Article
  10.

Do Vision and Language Encoders Represent the World Similarly? by Maniparambil, Mayug, Akshulakov, Raiymbek, Dahou Djilali, Yasser Abdelaziz, Seddik, Mohamed El Amine, Narayan, Sanath, Mangalam, Karttikeya, O'Connor, Noel E.

    “…Aligned text-image encoders such as CLIP have become the de facto model for vision-language tasks. Furthermore, modality-specific encoders achieve impressive…”
    Get full text
    Conference Proceeding
  11.

    Phase retrieval for Fourier Ptychography under varying amount of measurements by Boominathan, Lokesh, Maniparambil, Mayug, Gupta, Honey, Baburajan, Rahul, Mitra, Kaushik

    Published 09-05-2018
    “…Fourier Ptychography is a recently proposed imaging technique that yields high-resolution images by computationally transcending the diffraction blur of an…”
    Get full text
    Journal Article
  12.
  13.

    IITMSAT Communications System: A LeanSat Design Approach by Gulati, Akshay, Chavan, Shubham, Samuel, Joseph, Srinivasan, Sampoornam, Shekhar, Pradeep, Dave, Akshat, Sant, Aditya, Bhadane, Sourbh, Maniparambil, Mayug, Sivasankarakurup, Vishnu Prasad, Durairaj, Dhanalakshmi, Koilpillai, David, Ramachandran, Harishankar

    Published 03-11-2017
    “…IITMSAT is a student-built nano satellite mission of Indian Institute of Technology Madras, Chennai, India. The objective is to study the precipitation of high…”
    Get full text
    Journal Article