Title: | Automated R Instructor |
---|---|
Description: | Create videos from 'R Markdown' documents, or images and audio files. These images can come from image files or HTML slides, and the audio files can be provided by the user or computer voice narration can be created using 'Amazon Polly'. The purpose of this package is to allow users to create accessible, translatable, and reproducible lecture videos. See <https://aws.amazon.com/polly/> for more information. |
Authors: | Sean Kross [aut, cre], John Muschelli [ctb] |
Maintainer: | Sean Kross <[email protected]> |
License: | MIT + file LICENSE |
Version: | 0.4.1 |
Built: | 2024-11-01 03:08:56 UTC |
Source: | https://github.com/jhudsl/ari |
Burn Subtitles into a video
ari_burn_subtitles(video, srt, verbose = FALSE)
video | Video in 'mp4' format |
srt | Subtitle file in 'srt' format |
verbose | Print diagnostic messages. If > 1, then more are printed. |
Name of output video
This needs 'ffmpeg' that was compiled with '--enable-libass', as per https://trac.ffmpeg.org/wiki/HowToBurnSubtitlesIntoVideo.
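A quick way to check your local build for 'libass' support is to search the version banner. This is an illustrative sketch, not part of the package; the 'lecture.mp4' and 'lecture.srt' paths are placeholders, not files shipped with ari:

```r
# Illustrative check for libass support in the local ffmpeg build.
# The file paths below are placeholders.
if (nzchar(Sys.which("ffmpeg"))) {
  banner <- system2("ffmpeg", "-version", stdout = TRUE, stderr = TRUE)
  has_libass <- any(grepl("--enable-libass", banner, fixed = TRUE))
  if (has_libass) {
    # ari_burn_subtitles("lecture.mp4", "lecture.srt")
  }
  has_libass
}
```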
This function allows you to quickly access files that are used in the ari documentation.
ari_example(path = NULL)
path | The name of the file. If no argument is provided, then all of the example files will be listed. |
A character string
ari_example("ari_intro.Rmd")
ari_narrate creates a video from a script written in markdown and HTML slides created with rmarkdown or a similar package. This function uses Amazon Polly via ari_spin.
ari_narrate(
  script,
  slides,
  output = tempfile(fileext = ".mp4"),
  voice = text2speech::tts_default_voice(service = service),
  service = "amazon",
  capture_method = c("vectorized", "iterative"),
  subtitles = FALSE,
  ...,
  verbose = FALSE,
  audio_codec = get_audio_codec(),
  video_codec = get_video_codec(),
  cleanup = TRUE
)
script | Either a markdown file where every paragraph will be read over a corresponding slide, or an '.Rmd' file. |
slides | A path or URL for an HTML slideshow created with 'rmarkdown' or a similar package. |
output | The path to the video file which will be created. |
voice | The voice you want to use; see the 'text2speech' package for the available options. |
service | Speech synthesis service to use, passed to 'text2speech::tts()'. |
capture_method | Either "vectorized" or "iterative". |
subtitles | Should an '.srt' subtitle file be created alongside the video? |
... | Arguments that will be passed on to the underlying functions. |
verbose | Print diagnostic messages. If > 1, then more are printed. |
audio_codec | The audio encoder for the splicing. If this fails, try set_audio_codec() with a different codec. |
video_codec | The video encoder for the splicing. If this fails, see ffmpeg_video_codecs() for options. |
cleanup | If 'TRUE', intermediate files are deleted. |
The output from ari_spin
## Not run:
ari_narrate(
  system.file("test", "ari_intro_script.md", package = "ari"),
  system.file("test", "ari_intro.html", package = "ari"),
  voice = "Joey"
)
## End(Not run)
Given equal length vectors of paths to images (preferably '.jpg's or '.png's) and strings which will be synthesized by Amazon Polly or any other synthesizer available in 'tts', this function creates an '.mp4' video file where each image is shown with its corresponding narration. This function uses ari_stitch to create the video.
ari_spin(
  images,
  paragraphs,
  output = tempfile(fileext = ".mp4"),
  voice = text2speech::tts_default_voice(service = service),
  service = ifelse(have_polly(), "amazon", "google"),
  subtitles = FALSE,
  duration = NULL,
  tts_args = NULL,
  key_or_json_file = NULL,
  ...
)

have_polly()
images | A vector of paths to images. |
paragraphs | A vector of strings that will be spoken by Amazon Polly. |
output | A path to the video file which will be created. |
voice | The voice you want to use; see the 'text2speech' package for the available options. |
service | Speech synthesis service to use, passed to 'text2speech::tts()'. |
subtitles | Should an '.srt' subtitle file be created alongside the video? |
duration | A vector of numeric durations for each audio track. See pad_wav. |
tts_args | List of arguments to pass to 'text2speech::tts()'. |
key_or_json_file | Access key or JSON file to pass to the speech synthesis service for authentication. |
... | Additional arguments to ari_stitch. |
This function needs to connect to Amazon Web Services in order to create the narration. You can find a guide for accessing AWS from R here. For more information about how R connects to Amazon Polly, see the aws.polly documentation here.
The output from ari_stitch
## Not run:
slides <- system.file("test", c("mab2.png", "mab1.png"), package = "ari")
sentences <- c(
  "Welcome to my very interesting lecture.",
  "Here are some fantastic equations I came up with."
)
ari_spin(slides, sentences, voice = "Joey")
## End(Not run)
Given a vector of paths to images (preferably '.jpg's or '.png's) and a flat list of 'Wave's of equal length, this function will create an '.mp4' video file where each image is shown with its corresponding audio. Take a look at the readWave function if you want to import your audio files into R. Please be sure that all images have the same dimensions.
ari_stitch(
  images,
  audio,
  output = tempfile(fileext = ".mp4"),
  verbose = FALSE,
  cleanup = TRUE,
  ffmpeg_opts = "",
  divisible_height = TRUE,
  audio_codec = get_audio_codec(),
  video_codec = get_video_codec(),
  video_sync_method = "2",
  audio_bitrate = NULL,
  video_bitrate = NULL,
  pixel_format = "yuv420p",
  fast_start = FALSE,
  deinterlace = FALSE,
  stereo_audio = TRUE,
  duration = NULL,
  video_filters = NULL,
  frames_per_second = NULL,
  check_inputs = TRUE
)
images | A vector of paths to images. |
audio | A list of 'Wave' objects. |
output | A path to the video file which will be created. |
verbose | Print diagnostic messages. If > 1, then more are printed. |
cleanup | If 'TRUE', intermediate files are deleted. |
ffmpeg_opts | Additional options to send to 'ffmpeg'. |
divisible_height | Make height divisible by 2, which may be required if getting a "height not divisible by 2" error. |
audio_codec | The audio encoder for the splicing. If this fails, try set_audio_codec() with a different codec. |
video_codec | The video encoder for the splicing. If this fails, see ffmpeg_video_codecs() for options. |
video_sync_method | Video sync method. Should be "auto", "vfr", or a numeric. See https://ffmpeg.org/ffmpeg.html. |
audio_bitrate | Bit rate for audio. Passed to the '-b:a' option in 'ffmpeg'. |
video_bitrate | Bit rate for video. Passed to the '-b:v' option in 'ffmpeg'. |
pixel_format | Pixel format to encode for 'ffmpeg'. |
fast_start | Add the 'faststart' flag for YouTube and other sites; see https://trac.ffmpeg.org/wiki/Encode/YouTube. |
deinterlace | Should the video be de-interlaced? See https://ffmpeg.org/ffmpeg-filters.html; generally for YouTube. |
stereo_audio | Should the audio be forced to stereo? Corresponds to '-ac 2'. |
duration | A vector of numeric durations for each audio track. See pad_wav. |
video_filters | Any video filter options that are passed to 'ffmpeg'. |
frames_per_second | Frames per second of the video; should be an integer. |
check_inputs | Should the inputs be checked? Almost always should be 'TRUE'. |
This function uses FFmpeg, which you should be sure is installed before using this function. If running Sys.which("ffmpeg") in your R console returns an empty string after installing FFmpeg, then you should set the path to FFmpeg on your computer as an environment variable using Sys.setenv(ffmpeg = "path/to/ffmpeg"). The environment variable will always override the result of Sys.which("ffmpeg").
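The check-and-set workflow above can be sketched as follows; the path is a placeholder for your actual install location:

```r
# If ffmpeg is installed but not found on the PATH, point R at it directly.
if (!nzchar(Sys.which("ffmpeg"))) {
  Sys.setenv(ffmpeg = "/path/to/ffmpeg")  # placeholder path
}
# The environment variable always overrides Sys.which("ffmpeg").
Sys.getenv("ffmpeg")
```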
A logical value, with the attribute outfile for the output file.
## Not run:
if (ffmpeg_version_sufficient()) {
  result <- ari_stitch(
    ari_example(c("mab1.png", "mab2.png")),
    list(tuneR::noise(), tuneR::noise())
  )
  result <- ari_stitch(
    ari_example(c("mab1.png", "mab2.png")),
    list(tuneR::noise(), tuneR::noise()),
    ffmpeg_opts = "-qscale 0",
    verbose = 2
  )
  # system2("open", attributes(result)$outfile)
}
## End(Not run)
A simple function for demoing how spoken text will sound.
ari_talk(
  paragraphs,
  output = tempfile(fileext = ".wav"),
  voice = text2speech::tts_default_voice(service = service),
  service = "amazon"
)
paragraphs | A vector of strings that will be spoken by Amazon Polly. |
output | A path to the audio file which will be created. |
voice | The voice you want to use; see the 'text2speech' package for the available options. |
service | Speech synthesis service to use, passed to 'text2speech::tts()'. |
A 'Wave' output object, with the attribute outfile of the output file name.
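For example, a quick preview of a narration voice before building a full video might look like this; it requires AWS credentials for Amazon Polly, and the voice name follows the other examples in this document:

```r
## Not run:
wav <- ari_talk(
  "Welcome to my very interesting lecture.",
  output = tempfile(fileext = ".wav"),
  voice = "Joey"
)
# Play the result with the tuneR package:
# tuneR::play(wav)
## End(Not run)
```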
Get Codecs for ffmpeg
ffmpeg_codecs()
ffmpeg_video_codecs()
ffmpeg_audio_codecs()
ffmpeg_muxers()
ffmpeg_version()
ffmpeg_version_sufficient()
check_ffmpeg_version()
A 'data.frame' of codec names and capabilities
## Not run:
if (ffmpeg_version_sufficient()) {
  ffmpeg_codecs()
  ffmpeg_video_codecs()
  ffmpeg_audio_codecs()
}
## End(Not run)
Convert Files using FFmpeg
ffmpeg_convert(
  file,
  outfile = tempfile(fileext = paste0(".", tools::file_ext(file))),
  overwrite = TRUE,
  args = NULL
)
file | Video/PNG file to convert. |
outfile | Output file. |
overwrite | Should the output file be overwritten? |
args | Arguments to pass to 'ffmpeg'. |
A character string of the output file with different attributes
pngfile <- tempfile(fileext = ".png")
png(pngfile)
plot(0, 0)
dev.off()
if (have_ffmpeg_exec()) {
  res <- ffmpeg_convert(pngfile)
}
Check error output from individual video
ffmpeg_error_log(file, verbose = TRUE)
file | Path to the video. |
verbose | Print diagnostic messages. |
The output of the error log
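A minimal usage sketch, assuming 'ffmpeg' is available and 'lecture.mp4' stands in for a real video path:

```r
## Not run:
if (have_ffmpeg_exec()) {
  log <- ffmpeg_error_log("lecture.mp4")  # placeholder path
  log
}
## End(Not run)
```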
Get Path to ffmpeg Executable
ffmpeg_exec(quote = FALSE)
have_ffmpeg_exec()
quote | Should the returned path be quoted? |
The path to the 'ffmpeg' executable, or an error.
This uses 'Sys.getenv("ffmpeg")' and 'Sys.which("ffmpeg")' to find 'ffmpeg'. If 'ffmpeg' is not in your PATH, then please set the path to 'ffmpeg' using 'Sys.setenv(ffmpeg = "/path/to/ffmpeg")'.
## Not run:
if (have_ffmpeg_exec()) {
  ffmpeg_exec()
}
## End(Not run)
Pad Wave Objects
pad_wav(wav, duration = NULL)
wav | List of Wave objects. |
duration | If 'NULL', the duration is simply the length of each Wave object. Otherwise, a vector of numeric durations (in seconds) to pad each Wave to. |
A list of Wave objects, same length as input wav
wavs <- list(
  tuneR::noise(duration = 1.85 * 44100),
  tuneR::noise()
)
out <- pad_wav(wavs)
dur <- sapply(out, function(x) length(x@left) / x@samp.rate)
duration <- c(2, 2)
out <- pad_wav(wavs, duration = duration)
dur <- sapply(out, function(x) length(x@left) / x@samp.rate)
stopifnot(all(dur == duration))
duration <- c(2, 2.5)
out <- pad_wav(wavs, duration = duration)
dur <- sapply(out, function(x) length(x@left) / x@samp.rate)
stopifnot(isTRUE(all.equal(dur, duration)))
Set Default Audio and Video Codecs
set_audio_codec(codec)
set_video_codec(codec = "libx264")
get_audio_codec()
get_video_codec()
audio_codec_encode(codec)
video_codec_encode(codec)
codec | The codec to use or get for audio/video. Uses the 'ffmpeg_audio_codec' and 'ffmpeg_video_codec' options to store this information. |
A 'NULL' output
See [ffmpeg_codecs()] for options.
## Not run:
if (have_ffmpeg_exec()) {
  print(ffmpeg_version())
  get_audio_codec()
  set_audio_codec(codec = "libfdk_aac")
  get_audio_codec()
  set_audio_codec(codec = "aac")
  get_audio_codec()
}
if (have_ffmpeg_exec()) {
  get_video_codec()
  set_video_codec(codec = "libx265")
  get_video_codec()
  set_video_codec(codec = "libx264")
  get_video_codec()
}
## empty thing
if (have_ffmpeg_exec()) {
  video_codec_encode("libx264")
  audio_codec_encode("aac")
}
## End(Not run)