May 1, 2024

Evaluation

Information Technology Track

Entrepreneur Track

 

Category 1: People Impact. The potential impact of the project on resiliency. Category scoring (1-5):

  • The functionality provides little to no benefit to the “end user” and it has a major negative impact on resiliency.
  • The functionality provides little to no benefit to the “end user” but it has no negative impact on resiliency.
  • The functionality provides significant benefit to the “end user” but it has a major negative impact on resiliency.
  • The functionality provides significant benefit to the “end user” and it has little to no impact on resiliency.
  • The functionality provides significant benefit to the “end user” and it has a major positive impact on resiliency.

Category 2: Ambition. The scale and complexity of the solution being proposed. Category scoring (1-5):

  • There are already existing solutions for this problem that are identical or very similar.
  • The new code provides a better/faster/clearer way to attack the problem than existing solutions.
  • The new code adds functionality beyond that provided by the old code.
  • The new project tackles a problem that has been overlooked or ignored in the past, or attacks a problem with a new angle / on a bigger scale / on a higher level.
  • The new project attacks an entirely new problem, and provides a good solution.

Category 3: Innovation of Solution. The scale and novelty of the technology used and/or the development approach taken. Category scoring (1-5):

  • The chosen technology and design is already deeply established and readily available.
  • The code adds a new twist on established design.
  • The project adds a major departure from established design.
  • The project makes a profound break from established design.
  • The technology or design breaks new ground in the computer science/programming industry at large.

Category 4: Quality of Implementation. The ability of the team to reach a conclusion about the viability of the project. Category scoring (1, 3, 5):

  • The team was not able to offer a conclusion.
  • The team offered a definitive conclusion with no reason or evidence backing it.
  • The team offered a definitive conclusion with a well-thought-out reason or evidence backing it.

Category 5: Quality of Presentation. Ability of the judges to (a) clearly understand what the desired functionality is and (b) see that the functionality is behaving as expected. Category scoring (1-5):

  • The visualizations obscured the functionality, and the desired functionality was unclear.
  • The visualizations obscured the functionality, and the desired functionality was poorly explained.
  • The visualizations were difficult to understand and the functionality was poorly explained.
  • The visualizations were clear but the functionality was poorly described/explained.
  • The visualizations clearly showed the functionality working as described.

Business Track

Category 1: People Impact. The potential impact of the project on resiliency. Category scoring (1-5):

  • The functionality provides little to no benefit to the “end user” and it has a major negative impact on resiliency.
  • The functionality provides little to no benefit to the “end user” but it has no negative impact on resiliency.
  • The functionality provides significant benefit to the “end user” but it has a major negative impact on resiliency.
  • The functionality provides significant benefit to the “end user” and it has little to no impact on resiliency.
  • The functionality provides significant benefit to the “end user” and it has a major positive impact on resiliency.

Category 2: Business Value. The relevance of the solution to the use case and the potential marketability of the project. Category scoring (1-5):

  • The proposed solution is not relevant to the use case and it is not marketable.
  • The proposed solution is somewhat relevant but it is not marketable.
  • The proposed solution is relevant but it is not marketable.
  • The proposed solution is highly relevant and there may be a market for it.
  • The proposed solution is highly relevant and there is a definite market need for it. 

Category 3: Coolness. “Coolness” is an attempt to measure the important “wow” factor: the impression the hackathon entry makes when you see it. The proposed solution to the use case reflects ingenuity and is something a human being would feel delighted, excited, empowered, or even relieved to use. Category scoring (1-5):

  • The proposed solution does not deliver a wow factor for the end user. 
  • The proposed solution delivers a mild wow factor but its effect on the end user is undetermined.
  • The proposed solution delivers a moderate wow factor and the end user would likely feel ok about using it.
  • The proposed solution delivers on the wow factor but lacks ingenuity for the end user.
  • The proposed solution is not only very cool, but also ingenious. The end user would feel delighted, excited, empowered and/or relieved to use it. 

Level of innovation: Are the participants aware of other attempts to solve the same problem? Does the entry attempt to do something completely novel, i.e., new and unique?

Category 4: Quality of Presentation. Ability of the judges to (a) clearly understand what the desired functionality is and (b) see that the functionality is behaving as expected. Category scoring (1-5):

  • The visualizations obscured the functionality, and the desired functionality was unclear.
  • The visualizations obscured the functionality, and the desired functionality was poorly explained.
  • The visualizations were difficult to understand and the functionality was poorly explained.
  • The visualizations were clear but the functionality was poorly described/explained.
  • The visualizations clearly showed the functionality working as described.

Communication Track

1. Identify a real-world issue or problem (0.5 page)

  • What is it?
  • Why is it important (e.g., is it impactful, conflict-driven, an emergency, or current)?
  • Who does it affect?

2. Explain what kind of data (structured vs. unstructured) you would need to gather to understand, analyze, and/or solve the real-world issue or problem. (1.5 pages)

  • Visualize an attribute map that lists the kinds of data that would help illustrate the issue or problem.
  • Identify at least one relevant dataset (Kaggle, Data.gov, Twitter, etc.) and explain what kind of analysis (regression, classification, and/or clustering) you would do to analyze the problem; a brief sketch follows this list.
  • Provide a basic visualization of the problem in focus using the available information from the dataset(s). (Optional: bonus point)
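As a purely illustrative sketch of this step, the snippet below loads a public-style CSV, draws one basic chart of a key attribute, and runs a simple cluster analysis. The file name (community_resilience.csv) and column names (outage_hours, median_income) are hypothetical placeholders, not part of any required dataset; substitute whatever your attribute map and chosen dataset actually contain.

```python
# Minimal sketch, assuming a hypothetical CSV ("community_resilience.csv")
# with made-up columns ("outage_hours", "median_income"); swap in the
# dataset and attributes from your own attribute map.
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

# Load a dataset downloaded from Kaggle, Data.gov, etc.
df = pd.read_csv("community_resilience.csv")

# Basic visualization of the problem: distribution of one key attribute.
df["outage_hours"].hist(bins=30)
plt.xlabel("Outage hours per household")
plt.ylabel("Number of households")
plt.title("One attribute illustrating the issue")
plt.savefig("problem_overview.png")

# One possible analysis: cluster records by two numeric attributes.
features = df[["outage_hours", "median_income"]].dropna()
clusters = KMeans(n_clusters=3, n_init=10).fit_predict(features)
print(features.assign(cluster=clusters).groupby("cluster").mean())
```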

3. Explain what kind of machine learning models (supervised vs. unsupervised) you would recommend to solve the issue or problem (0.5 page)

  • Will you recommend supervised or unsupervised machine learning models? Why? (A brief sketch contrasting the two follows.)
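As a companion to the question above, the sketch below contrasts the two choices on the same hypothetical table: a supervised classifier when a labeled outcome column exists, and unsupervised clustering when it does not. The file and column names (including needs_assistance) are illustrative assumptions only, not requirements.

```python
# Minimal sketch, assuming the same hypothetical CSV and columns as above;
# the label column "needs_assistance" is also an illustrative placeholder.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

df = pd.read_csv("community_resilience.csv")
X = df[["outage_hours", "median_income"]].dropna()

if "needs_assistance" in df.columns:
    # Supervised: the data include a labeled outcome to predict.
    y = df.loc[X.index, "needs_assistance"]
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0
    )
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("Held-out accuracy:", model.score(X_test, y_test))
else:
    # Unsupervised: no labels, so look for structure
    # (e.g., groups of similarly affected communities).
    clusters = KMeans(n_clusters=3, n_init=10).fit_predict(X)
    print("Cluster sizes:", pd.Series(clusters).value_counts().to_dict())
```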

4. Fully explain your proposed solution (1.5 pages)

  • Describe how the solution works (functions).
  • Describe what makes your solution unique (characteristics and innovations).
  • Provide a visual representation of your project and the desired products/outcomes.

5. Explain the social and cultural implications of the solution (1 page)

  • Who will be the end-users of the solution?
  • Describe the benefits of the solution to the end-users.
  • What are the social and cultural impacts of the solution on other stakeholders, such as policymakers, voters, members of the public, and investors?