At the time of writing, apparently not every model supports this feature yet.
This week I'll set up Llama 3.1 — which made headlines because its top-tier variant reportedly beats GPT-4o — and play around with it.
In practice
Installing llama3.1 (if you haven't already)
```
ollama pull llama3.1
```
The script
```python
import json
import ollama
import asyncio


# Simulates an API call to get flight times
# In a real application, this would fetch data from a live database or API
def get_flight_times(departure: str, arrival: str) -> str:
    flights = {
        'NYC-LAX': {'departure': '08:00 AM', 'arrival': '11:30 AM', 'duration': '5h 30m'},
        'LAX-NYC': {'departure': '02:00 PM', 'arrival': '10:30 PM', 'duration': '5h 30m'},
        'LHR-JFK': {'departure': '10:00 AM', 'arrival': '01:00 PM', 'duration': '8h 00m'},
        'JFK-LHR': {'departure': '09:00 PM', 'arrival': '09:00 AM', 'duration': '7h 00m'},
        'CDG-DXB': {'departure': '11:00 AM', 'arrival': '08:00 PM', 'duration': '6h 00m'},
        'DXB-CDG': {'departure': '03:00 AM', 'arrival': '07:30 AM', 'duration': '7h 30m'},
    }

    key = f'{departure}-{arrival}'.upper()
    return json.dumps(flights.get(key, {'error': 'Flight not found'}))


async def run(model: str):
    client = ollama.AsyncClient()

    # Initialize conversation with a user query
    messages = [{'role': 'user', 'content': 'What is the flight time from New York (NYC) to Los Angeles (LAX)?'}]

    # First API call: Send the query and function description to the model
    response = await client.chat(
        model=model,
        messages=messages,
        tools=[
            {
                'type': 'function',
                'function': {
                    'name': 'get_flight_times',
                    'description': 'Get the flight times between two cities',
                    'parameters': {
                        'type': 'object',
                        'properties': {
                            'departure': {
                                'type': 'string',
                                'description': 'The departure city (airport code)',
                            },
                            'arrival': {
                                'type': 'string',
                                'description': 'The arrival city (airport code)',
                            },
                        },
                        'required': ['departure', 'arrival'],
                    },
                },
            },
        ],
    )

    # Add the model's response to the conversation history
    messages.append(response['message'])

    # Check if the model decided to use the provided function
    if not response['message'].get('tool_calls'):
        print("The model didn't use the function. Its response was:")
        print(response['message']['content'])
        return

    # Process function calls made by the model
    if response['message'].get('tool_calls'):
        available_functions = {
            'get_flight_times': get_flight_times,
        }
        for tool in response['message']['tool_calls']:
            function_to_call = available_functions[tool['function']['name']]
            function_response = function_to_call(
                tool['function']['arguments']['departure'],
                tool['function']['arguments']['arrival'],
            )
            # Add function response to the conversation
            messages.append(
                {
                    'role': 'tool',
                    'content': function_response,
                }
            )

    # Second API call: Get final response from the model
    final_response = await client.chat(model=model, messages=messages)
    print(final_response['message']['content'])


# Run the async function
asyncio.run(run('llama3.1'))
```
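Before running this against a live server, the tool-dispatch portion can be exercised offline by hand-building the message dict that `client.chat` would return. A minimal sketch — `mock_message` is a made-up stand-in that mirrors the shape the script expects, and no Ollama daemon is needed:

```python
import json


# Same lookup helper as in the script above, trimmed to one route
def get_flight_times(departure: str, arrival: str) -> str:
    flights = {
        'NYC-LAX': {'departure': '08:00 AM', 'arrival': '11:30 AM', 'duration': '5h 30m'},
    }
    key = f'{departure}-{arrival}'.upper()
    return json.dumps(flights.get(key, {'error': 'Flight not found'}))


# A hand-built message imitating a model response that requests one tool call
mock_message = {
    'role': 'assistant',
    'content': '',
    'tool_calls': [
        {'function': {'name': 'get_flight_times',
                      'arguments': {'departure': 'NYC', 'arrival': 'LAX'}}}
    ],
}

available_functions = {'get_flight_times': get_flight_times}

# Same dispatch loop as in the script: look up the named function and call it
for tool in mock_message.get('tool_calls', []):
    fn = available_functions[tool['function']['name']]
    result = fn(tool['function']['arguments']['departure'],
                tool['function']['arguments']['arrival'])
    print(result)  # {"departure": "08:00 AM", "arrival": "11:30 AM", "duration": "5h 30m"}
```

The JSON string printed here is what would be appended to `messages` as the `'tool'` role entry before the second `client.chat` call.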
The model didn't use the function. Its response was:
The flight time from New York's John F. Kennedy International Airport (JFK) or LaGuardia Airport (LGA) to Los Angeles International Airport (LAX) depends on several factors, such as the airline, flight route, and weather conditions. Here are some approximate flight times:

Non-stop flights:
+ From JFK to LAX: 5 hours and 30 minutes
+ From LGA to LAX: 5 hours and 45 minutes

With one stop:
+ Flight duration can range from 7-10 hours, depending on the layover time
Please note that these times are approximate and may vary depending on the airline, flight schedule, and other factors. I recommend checking with your preferred airline or a flight search engine like Google Flights or Skyscanner for the most up-to-date and accurate flight information.
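So in this run llama3.1 chose to answer from its own training data rather than emitting `tool_calls`, which is exactly the early-return branch in the script. A minimal sketch of why that branch fires — the `message` dict here is hand-made for illustration, not a real API response:

```python
# A trimmed-down stand-in for the response behind the output above:
# the model answered in plain text and attached no tool_calls key
message = {
    'role': 'assistant',
    'content': 'The flight time from NYC to LAX is about 5 hours and 30 minutes.',
}

# The script's guard: dict.get returns None for the missing key, so the
# early-return branch fires and the raw answer is printed as-is
used_tool = bool(message.get('tool_calls'))
if not used_tool:
    print("The model didn't use the function. Its response was:")
    print(message['content'])
```

Whether the model decides to call the tool can vary from run to run; in the tool-call case, `message['tool_calls']` would instead hold a list of entries with `'name'` and `'arguments'` under `'function'`, as handled by the dispatch loop in the script.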