GenerativeModel
@available(iOS 15.0, macOS 12.0, tvOS 15.0, watchOS 8.0, *)
public final class GenerativeModel : Sendable
A type that represents a remote multimodal model (like Gemini), with the ability to generate content based on various input types.
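Example
A minimal sketch of obtaining a model instance to use with the methods below. The module name GoogleGenerativeAI, the GenerativeModel(name:apiKey:) initializer, and the model name are assumptions; this page does not document how instances are created, so adapt this to however your SDK vends them.
Swift
import GoogleGenerativeAI  // assumed module name

// Assumed initializer and placeholder values; not documented on this page.
let model = GenerativeModel(name: "gemini-1.5-flash", apiKey: "YOUR_API_KEY")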
-
Generates content from String and/or image inputs, given to the model as a prompt, that are representable as one or more Parts.
Since Parts do not specify a role, this method is intended for generating content from zero-shot or “direct” prompts. For few-shot prompts, see generateContent(_ content: [ModelContent]).
Throws
A GenerateContentError if the request failed.
Declaration
Swift
public func generateContent(_ parts: any PartsRepresentable...) async throws -> GenerateContentResponse
Parameters
parts    The input(s) given to the model as a prompt (see PartsRepresentable for conforming types).
Return Value
The content generated by the model.
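Example
A brief usage sketch for this variadic overload, assuming a configured model instance (see the sketch above) and assuming GenerateContentResponse exposes a convenience text accessor, which is not documented on this page.
Swift
import GoogleGenerativeAI  // assumed module name

func generateHaiku(using model: GenerativeModel) async throws {
  // A String is representable as one or more Parts, so it can be passed directly.
  let response = try await model.generateContent("Write a haiku about Swift concurrency.")
  print(response.text ?? "No text in response.")
}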
-
Generates new content from input content given to the model as a prompt.
Throws
A GenerateContentError if the request failed.
Declaration
Swift
public func generateContent(_ content: [ModelContent]) async throws -> GenerateContentResponse
Parameters
content    The input(s) given to the model as a prompt.
Return Value
The generated content response from the model.
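Example
A sketch of a few-shot prompt built from role-tagged content. The ModelContent(role:parts:) initializer and the GenerateContentResponse.text accessor are assumptions not documented on this page.
Swift
import GoogleGenerativeAI  // assumed module name

func classifySentiment(using model: GenerativeModel) async throws {
  // Few-shot prompt: alternating user/model turns establish the expected pattern.
  let prompt: [ModelContent] = [
    ModelContent(role: "user", parts: "Review: The battery lasts all day. Sentiment:"),
    ModelContent(role: "model", parts: "positive"),
    ModelContent(role: "user", parts: "Review: The screen cracked within a week. Sentiment:"),
  ]
  let response = try await model.generateContent(prompt)
  print(response.text ?? "No text in response.")
}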
-
Generates content from String and/or image inputs, given to the model as a prompt, that are representable as one or more Parts.
Since Parts do not specify a role, this method is intended for generating content from zero-shot or “direct” prompts. For few-shot prompts, see generateContentStream(_ content: @autoclosure () throws -> [ModelContent]).
Declaration
Swift
@available(macOS 12.0, *)
public func generateContentStream(_ parts: any PartsRepresentable...) throws -> AsyncThrowingStream<GenerateContentResponse, Error>
Parameters
parts    The input(s) given to the model as a prompt (see PartsRepresentable for conforming types).
Return Value
A stream wrapping content generated by the model or a GenerateContentError error if an error occurred.
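Example
A sketch of consuming the stream, assuming a configured model instance and assuming each streamed GenerateContentResponse chunk exposes a text accessor (not documented on this page).
Swift
import GoogleGenerativeAI  // assumed module name

func streamStory(using model: GenerativeModel) async throws {
  // The call itself is synchronous and throwing; iteration over the stream is asynchronous.
  let stream = try model.generateContentStream("Tell a short story about a lighthouse keeper.")
  for try await chunk in stream {
    print(chunk.text ?? "", terminator: "")
  }
}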
-
Generates new content from input content given to the model as a prompt.
Declaration
Swift
@available(macOS 12.0, *)
public func generateContentStream(_ content: [ModelContent]) throws -> AsyncThrowingStream<GenerateContentResponse, Error>
Parameters
content    The input(s) given to the model as a prompt.
Return Value
A stream wrapping content generated by the model or a GenerateContentError error if an error occurred.
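Example
A sketch of streaming from structured input, with the same ModelContent and text-accessor assumptions noted above.
Swift
import GoogleGenerativeAI  // assumed module name

func streamTranslation(using model: GenerativeModel) async throws {
  let content: [ModelContent] = [
    ModelContent(role: "user", parts: "Translate to French: Good morning"),
    ModelContent(role: "model", parts: "Bonjour"),
    ModelContent(role: "user", parts: "Translate to French: Thank you"),
  ]
  for try await chunk in try model.generateContentStream(content) {
    print(chunk.text ?? "", terminator: "")
  }
}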
-
Creates a new chat conversation using this model with the provided history.
Declaration
Swift
public func startChat(history: [ModelContent] = []) -> Chat
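Example
A sketch of a multi-turn conversation seeded with history. The Chat type's sendMessage(_:) method, the ModelContent(role:parts:) initializer, and the text accessor on the reply are assumptions not documented on this page.
Swift
import GoogleGenerativeAI  // assumed module name

func continueConversation(using model: GenerativeModel) async throws {
  // Seed the chat with earlier turns, then send a follow-up message.
  let chat = model.startChat(history: [
    ModelContent(role: "user", parts: "I'm learning Swift."),
    ModelContent(role: "model", parts: "Great! What would you like to know?"),
  ])
  let reply = try await chat.sendMessage("How do actors help prevent data races?")  // assumed Chat API
  print(reply.text ?? "No text in response.")
}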
-
Runs the model’s tokenizer on String and/or image inputs that are representable as one or more Parts.
Since Parts do not specify a role, this method is intended for tokenizing zero-shot or “direct” prompts. For few-shot input, see countTokens(_ content: @autoclosure () throws -> [ModelContent]).
Declaration
Swift
public func countTokens(_ parts: any PartsRepresentable...) async throws -> CountTokensResponse
Parameters
parts    The input(s) given to the model as a prompt (see PartsRepresentable for conforming types).
Return Value
The results of running the model’s tokenizer on the input; contains totalTokens.
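Example
A sketch of checking prompt size before sending a request, assuming a configured model instance; totalTokens is the field described above.
Swift
import GoogleGenerativeAI  // assumed module name

func logPromptSize(using model: GenerativeModel) async throws {
  let prompt = "Summarize the plot of Hamlet in two sentences."
  let response = try await model.countTokens(prompt)
  print("Prompt uses \(response.totalTokens) tokens.")
}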
-
Runs the model’s tokenizer on the input content and returns the token count.
Declaration
Swift
public func countTokens(_ content: [ModelContent]) async throws -> CountTokensResponse
Parameters
content    The input given to the model as a prompt.
Return Value
The results of running the model’s tokenizer on the input; contains totalTokens.
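Example
A sketch of counting tokens for multi-turn input, with the same ModelContent(role:parts:) assumption as in the earlier sketches.
Swift
import GoogleGenerativeAI  // assumed module name

func logConversationSize(using model: GenerativeModel) async throws {
  let conversation: [ModelContent] = [
    ModelContent(role: "user", parts: "Define recursion."),
    ModelContent(role: "model", parts: "Recursion is when a function calls itself with a smaller input."),
    ModelContent(role: "user", parts: "Show an example in Swift."),
  ]
  let response = try await model.countTokens(conversation)
  print("Conversation uses \(response.totalTokens) tokens.")
}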