With the rapid development and adoption of generative AI across industries, organizations need to better understand both the business value and the associated risks of these technologies. Recognizing this, prominent researchers (Bommasani et al., 2023a; 2023b) have attempted to quantify and assess components of generative AI models to determine whether they align with measurable standards of responsible AI. While beneficial, the criteria in these studies are arguably too narrow in scope to produce meaningful, comprehensive action plans for users or developers, or to ensure that systems are responsible from a global standpoint. To fully harness AI in a responsible manner, a broad approach accompanied by a scientific method of measuring AI systems at scale is needed. In response to these needs, and as a complementary effort to prior research, Vero AI™ developed the VIOLET Impact Model™. The central aims of this paper were to take an exploratory approach to evaluating 10 popular generative AI models, to quantify how responsibly these models are documented, and to demonstrate the utility of Vero AI's VIOLET Impact Model as an approach to measuring responsible AI system effort within these rapidly developing and intricate technologies. No single generative AI model achieved a perfect score. We found that, on average, most of the models we evaluated did well in providing documentation pertaining to model Effectiveness; however, there was room for improvement in their documentation pertaining to Optimization. More specific scores on VIOLET Components and Themes are shown in the report. In sum, our evaluation of the generative AI models using the VIOLET Impact Model demonstrated its utility by pinpointing valuable strategic insights into the responsibility of these systems.