VRM conversion is a two-step process. After the first export, you have to put the VRM file back into your Unity project to actually set up the VRM blend shape clips and other things.
You can follow the guide on the VRM website, which is very detailed with many screenshots.

N editions of Windows are missing some multimedia features. First make sure your Windows is updated, then install the media feature pack.

Right click the downloaded archive and select Extract All. You should get a new folder called VSeeFace. Inside, there should be a file called VSeeFace with a blue icon, like the logo on this site. Double click on that to run VSeeFace.

Sometimes even things that are not very face-like at all might get picked up by the tracking. A good way to check is to run the run.bat file. It will show you the camera image with tracking points. If green tracking points show up somewhere on the background while you are not in view of the camera, that might be the cause.
Just make sure to close VSeeFace and any other programs that might be accessing the camera first. Beyond that, just give it a try and see how it runs. Face tracking can be pretty resource intensive, so if you want to run a game and stream at the same time, you may need a somewhat beefier PC for that. There is some performance tuning advice at the bottom of this page. If you are very experienced with Linux and wine, you can also try following these instructions for running it on Linux.
It would be quite hard to add as well, because OpenSeeFace is only designed to work with regular RGB webcam images for tracking. Before looking at new webcams, make sure that your room is well lit. It should be basically as bright as possible. At the same time, if you are wearing glasses, avoid positioning light sources in a way that will cause reflections on your glasses when seen from the angle of the camera.
One thing to note is that insufficient light will usually cause webcams to quietly lower their frame rate. For example, my camera will only give me 15 fps even when set to 30 fps unless I have bright daylight coming in through the window, in which case it may go up to 20 fps. You can check the actual camera framerate by looking at the TR tracking rate value in the lower right corner of VSeeFace, although in some cases this value might be bottlenecked by CPU speed rather than the webcam.
As far as resolution is concerned, there is a sweet spot: running the camera at lower resolutions can still be fine, but results will be a bit more jittery and things like eye tracking will be less accurate. By default, VSeeFace caps the camera framerate at 30 fps, so there is not much point in getting a webcam with a higher maximum framerate.
While there is an option to remove this cap, actually increasing the tracking framerate to 60 fps will only make a very tiny difference with regard to how nice things look, but it will double the CPU usage of the tracking process. However, the fact that a camera is able to do 60 fps might still be a plus with respect to its general quality level. Having a ring light on the camera can help avoid tracking issues caused by the room being too dark, but it can also cause issues with reflections on glasses and can feel uncomfortable.
With USB 2, the images captured by the camera will have to be compressed. While there are free tiers for Live2D integration licenses, adding Live2D support to VSeeFace would only make sense if people could load their own models.

Try setting the camera settings on the VSeeFace starting screen to default settings. The selection will be marked in red, but you can ignore that and press start anyway.
It usually works this way: enable the virtual camera in VSeeFace, set a single colored background image, add the VSeeFace camera as a source, then go to the color tab and enable a chroma key with the color corresponding to the background image.

Note that this may not give as clean results as capturing in OBS with proper alpha transparency. Please note that the camera needs to be re-enabled every time you start VSeeFace unless the option to keep it enabled is set. This option can be found in the advanced settings section.
It uses paid assets from the Unity asset store that cannot be freely redistributed. However, the actual face tracking and avatar animation code is open source. You can find it here and here. You can configure it in Unity instead, as described in this video.

The virtual camera can be used to use VSeeFace for teleconferences, Discord calls and similar. It can also be used in situations where using a game capture is not possible or very slow, due to specific laptop hardware setups.
To use the virtual camera, you have to enable it in the General settings. For performance reasons, it is disabled again after closing the program. When using it for the first time, you first have to install the camera driver by clicking the installation button in the virtual camera section of the General settings.
This should open a UAC prompt asking for permission to make changes to your computer, which is required to set up the virtual camera. If no such prompt appears and the installation fails, starting VSeeFace with administrator permissions may fix this, but it is not generally recommended.
After a successful installation, the button will change to an uninstall button that allows you to remove the virtual camera from your system. After installation, it should appear as a regular webcam. The virtual camera only supports a single fixed resolution. Changing the window size will most likely lead to undesirable results, so it is recommended that the Allow window resizing option be disabled while using the virtual camera.
The virtual camera supports loading background images, which can be useful for vtuber collabs over Discord calls, by setting a unicolored background.
Should you encounter strange issues with the virtual camera and have previously used it with an older version of VSeeFace, try uninstalling and reinstalling the camera driver. If supported by the capture program, the virtual camera can be used to output video with alpha transparency. To make use of this, a fully transparent PNG needs to be loaded as the background image. Partially transparent backgrounds are supported as well. Please note that using partially transparent background images with a capture program that does not support RGBA webcams can lead to color errors. Apparently, the Twitch video capturing app supports it by default.
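If you do not have a fully transparent PNG at hand, you can generate one yourself. This is a small sketch using only the Python standard library; the function names are just for illustration:

```python
import struct
import zlib

def _chunk(tag: bytes, data: bytes) -> bytes:
    """Build one PNG chunk: length, tag, data, CRC over tag+data."""
    return (struct.pack(">I", len(data)) + tag + data
            + struct.pack(">I", zlib.crc32(tag + data)))

def transparent_png(width: int, height: int) -> bytes:
    """Return the bytes of a fully transparent RGBA PNG of the given size."""
    signature = b"\x89PNG\r\n\x1a\n"
    # IHDR: width, height, bit depth 8, color type 6 (RGBA), default methods
    ihdr = _chunk(b"IHDR", struct.pack(">IIBBBBB", width, height, 8, 6, 0, 0, 0))
    # Each scanline: filter byte 0, then fully transparent RGBA pixels
    raw = b"".join(b"\x00" + b"\x00\x00\x00\x00" * width for _ in range(height))
    idat = _chunk(b"IDAT", zlib.compress(raw))
    iend = _chunk(b"IEND", b"")
    return signature + ihdr + idat + iend

# Write a 512x512 transparent background image
with open("transparent.png", "wb") as f:
    f.write(transparent_png(512, 512))
```

Any image editor that supports alpha channels will of course work just as well.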
As the virtual camera keeps running even while the UI is shown, using it instead of a game capture can be useful if you often make changes to settings during a stream.

It is possible to perform the face tracking on a separate PC. This can, for example, help reduce CPU load. This process is a bit advanced and requires some general knowledge about the use of command line programs and batch files.
Inside this folder is a file called run.bat. Running this file will first ask for some information to set up the camera and then run the tracker process that is usually run in the background of VSeeFace. If you entered the correct information, it will show an image of the camera feed with overlaid tracking points, so do not run it while streaming your desktop. This can also be useful to figure out issues with the camera or tracking in general. The tracker can be stopped by pressing q while the image display window is active.
To use it for network tracking, edit the run.bat file. If you would like to disable the webcam image display, you can change -v 3 to -v 0. When starting this modified file, in addition to the camera information, you will also have to enter the local network IP address of PC A, the PC running VSeeFace.
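Under the hood, the tracker simply sends its tracking data as UDP packets to the address you entered. As a rough illustration in Python (the packet contents here are placeholder bytes, and treat port 11573 as an assumed default rather than a confirmed value):

```python
import socket

def send_tracking_packet(data: bytes, ip: str, port: int = 11573) -> None:
    """Send one UDP packet of tracking data to the PC running VSeeFace.

    UDP is connectionless: if nothing is listening on the other side,
    the packet is silently dropped, which is why the avatar simply
    does not move when no receiver is running.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(data, (ip, port))
    finally:
        sock.close()

# Example: send a placeholder packet to a PC on the local network
send_tracking_packet(b"placeholder tracking data", "127.0.0.1")
```

This is also why a firewall on the receiving PC can make the avatar freeze without any visible error message.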
When no tracker process is running, the avatar in VSeeFace will simply not move. Press the start button. If you are sure that the camera number will not change and know a bit about batch files, you can also modify the batch file to remove the interactive input and just hard code the values.

You can set up VSeeFace to recognize your facial expressions and automatically trigger VRM blendshape clips in response. There are two different modes that can be selected in the General settings.
This mode is easy to use, but it is limited to the Fun, Angry and Surprised expressions. Simply enable it and it should work. There are two sliders at the bottom of the General settings that can be used to adjust how it works. To trigger the Fun expression, smile, moving the corners of your mouth upwards. To trigger the Angry expression, do not smile and move your eyebrows down. To trigger the Surprised expression, move your eyebrows up.

The other mode requires you to first teach the program how your face will look for each expression, which can be tricky and take a bit of time.
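Conceptually, the simple mode boils down to thresholding a couple of tracking values, with the sliders moving the thresholds. This is a hypothetical sketch, not VSeeFace's actual code; the function, value ranges and default thresholds are all illustrative:

```python
def simple_expression(smile: float, brows: float,
                      smile_threshold: float = 0.5,
                      brow_threshold: float = 0.5) -> str:
    """Map two tracking values to an expression name.

    smile: 0 (no smile) to 1 (full smile)
    brows: -1 (eyebrows fully down) to 1 (eyebrows fully up)
    The thresholds play the role of the two adjustment sliders.
    """
    if smile > smile_threshold:
        return "Fun"
    if brows < -brow_threshold:
        return "Angry"
    if brows > brow_threshold:
        return "Surprised"
    return "Neutral"

# Smiling with neutral eyebrows triggers Fun
print(simple_expression(0.9, 0.0))
```

Raising a threshold makes the corresponding expression harder to trigger, which is essentially what the sliders let you tune.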
The following video explains the process. When the Calibrate button is pressed, most of the recorded data is used to train a detection system, while the rest of the data is used to verify the accuracy. This will result in a number between 0 (everything was misdetected) and 1 (everything was detected correctly), which is displayed above the calibration button.
A good rule of thumb is to aim for a value a little below 1. While this might be unexpected, a value of 1 or very close to 1 is not actually a good thing and usually indicates that you need to record more data. A value significantly lower than that usually means that part of the data was detected incorrectly. If this happens, either reload your last saved calibration or restart from the beginning.
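The displayed number is essentially held-out accuracy: the fraction of validation samples whose expression was detected correctly. A minimal sketch (function name hypothetical):

```python
def validation_accuracy(predicted: list, actual: list) -> float:
    """Fraction of held-out samples detected correctly.

    Returns a value between 0 (everything was misdetected)
    and 1 (everything was detected correctly), like the number
    shown above the Calibrate button.
    """
    if not actual:
        raise ValueError("no validation samples")
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

# Three out of four held-out samples detected correctly
print(validation_accuracy(["fun", "angry", "fun", "fun"],
                          ["fun", "angry", "neutral", "fun"]))
```

This also explains why a value of exactly 1 is suspicious: with too little recorded data, the validation set is so small that perfect accuracy is easy to hit by chance.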
It is also possible to set up only a few of the possible expressions. This usually improves detection accuracy. However, make sure to always set up the Neutral expression. This expression should contain any kind of expression that should not be detected as one of the other expressions.
To remove an already set up expression, press the corresponding Clear button and then Calibrate. Having an expression detection setup loaded can increase the startup time of VSeeFace even if expression detection is disabled or set to simple mode. To avoid this, press the Clear calibration button, which will clear out all calibration data and prevent it from being loaded at startup.
You can always load your detection setup again using the Load calibration button. VSeeFace supports both sending and receiving motion data (humanoid bone rotations, root offset, blendshape values) using the VMC protocol introduced by Virtual Motion Capture.
If both sending and receiving are enabled, sending will be done after received data has been applied. In this case, make sure that VSeeFace is not sending data to itself. When receiving motion data, VSeeFace can additionally perform its own tracking and apply it.
If only Track fingers and Track hands to shoulders are enabled, the Leap Motion tracking will be applied, but camera tracking will remain disabled. If any of the other options are enabled, camera based tracking will be enabled and the selected parts of it will be applied to the avatar. Please note that received blendshape data will not be used for expression detection and that, if received blendshapes are applied to a model, triggering expressions via hotkeys will not work.
You can find a list of applications with support for the VMC protocol here. This video by Suvidriel explains how to set this up with Virtual Motion Capture. Using the prepared Unity project and scene, pose data will be sent over the VMC protocol while the scene is being played. If an animator is added to the model in the scene, the animation will be transmitted; otherwise it can be posed manually as well.
For best results, it is recommended to use the same models in both VSeeFace and the Unity scene. Perfect sync blendshape information and tracking data can be received from the iFacialMocap and FaceMotion3D applications. For this to work properly, it is necessary for the avatar to have the necessary 52 ARKit blendshapes. The avatar should now move according to the received data and the settings below. You should see the packet counter counting up. If the packet counter does not count up, data is not being received at all, indicating a network or firewall issue.
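To check on the receiving side whether any data arrives at all (for example, to rule out firewall issues independently of VSeeFace), a small hypothetical Python helper can listen on the port you configured and count incoming UDP packets:

```python
import socket

def count_packets(port: int, timeout: float = 5.0) -> int:
    """Listen on the given UDP port and count incoming packets.

    Waits up to `timeout` seconds between packets before giving up.
    A count of 0 means no data is reaching this machine on that port,
    pointing at a network or firewall problem rather than a VSeeFace setting.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    sock.settimeout(timeout)
    count = 0
    try:
        while True:
            sock.recvfrom(65535)
            count += 1
    except socket.timeout:
        pass
    finally:
        sock.close()
    return count
```

Make sure VSeeFace itself is closed while running such a check, since only one program can bind the port at a time.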
Certain iPhone apps like Waidayo can send perfect sync blendshape information over the VMC protocol, which VSeeFace can receive, allowing you to use iPhone based face tracking. This requires an especially prepared avatar containing the necessary blendshapes.
A list of these blendshapes can be found here. You can find an example avatar containing the necessary blendshapes here. Enabling all other options except Track face features will also apply the usual head tracking and body movements, which may allow more freedom of movement than just the iPhone tracking on its own. If the tracking remains on, this may be caused by expression detection being enabled. In this case, additionally set the expression detection setting to none. A full Japanese guide can be found here.
The following gives a short English language summary. You can do this by dragging the file in; it should now get imported. To do so, load this project into Unity; it should be imported automatically. You can then delete the included Vita model from the scene and add your own avatar by dragging it into the Hierarchy section on the left. You can now start the Neuron software and set it up for transmitting BVH data. Once this is done, press play in Unity to play the scene.
If no red text appears, the avatar should have been set up correctly and should be receiving tracking data from the Neuron software, while also sending the tracking data over VMC protocol. Next, you can start VSeeFace and set up the VMC receiver according to the port listed in the message displayed in the game view of the running Unity scene. Once enabled, it should start applying the motion tracking data from the Neuron to the avatar in VSeeFace. The provided project includes NeuronAnimator by Keijiro Takahashi and uses it to receive the tracking data from the Perception Neuron software and apply it to the avatar.
ThreeDPoseTracker allows webcam based full body tracking. While the ThreeDPoseTracker application can be used freely for non-commercial and commercial uses, the source code is for non-commercial use only. It allows transmitting its pose data using the VMC protocol, so by enabling VMC receiving in VSeeFace, you can use its webcam based full body tracking to animate your avatar.
From what I saw, it is set up in such a way that the avatar will face away from the camera in VSeeFace, so you will most likely have to turn the lights and camera around. If you are working on an avatar, it can be useful to get an accurate idea of how it will look in VSeeFace before exporting the VRM.
You can load this example project into Unity. After loading the project in Unity, load the provided scene inside the Scenes folder. If you press play, it should show some instructions on how to use it.
If you prefer setting things up yourself, the following settings in Unity should give you an accurate idea of how the avatar will look with default settings in VSeeFace. If you enabled shadows in the VSeeFace light settings, set the shadow type on the directional light to soft. To see the model with better light and shadow quality, use the Game view.

It is possible to translate VSeeFace into different languages and I am happy to add contributed translations!
The language code should usually be given in two lowercase letters, but can be longer in special cases. For a partial reference of language codes, you can refer to this list. Now you can edit this new file and translate the "text" parts of each entry into your language.
New languages should automatically appear in the language selection menu in VSeeFace, so you can check how your translation looks inside the program.
Note that a JSON syntax error might lead to your whole file not loading correctly. In this case, you may be able to find the position of the error by looking into the Player.log file. Generally, your translation has to be enclosed by double quotes "like this".
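You can also check the file for syntax errors yourself before loading it in VSeeFace. Python's standard library reports the exact line and column of a JSON error; the helper below is just an illustration:

```python
import json

def check_translation_file(path: str) -> str:
    """Try to parse a translation JSON file.

    Returns "OK" if the file is valid JSON, otherwise a message
    with the line and column of the first syntax error.
    """
    try:
        with open(path, encoding="utf-8") as f:
            json.load(f)
    except json.JSONDecodeError as e:
        return f"Syntax error at line {e.lineno}, column {e.colno}: {e.msg}"
    return "OK"
```

A common mistake is a trailing comma after the last entry, which is valid in many programming languages but not in JSON.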
Some people have gotten VSeeFace to run on Linux through wine and it might be possible on Mac as well, but to my knowledge nobody has tried. However, reading webcams is not possible through wine versions before 6. Starting with wine 6, you can try just using it normally.
For previous versions, or if webcam reading does not work properly, you can as a workaround set the camera in VSeeFace to [OpenSeeFace tracking] and run the facetracker script from OpenSeeFace manually. To do this, you will need a Python 3 installation. To run the tracker, first enter the OpenSeeFace directory and activate the virtual environment for the current session.
Running this command will send the tracking data to a UDP port on localhost, on which VSeeFace will listen to receive the tracking data. The -c argument specifies which camera should be used, with the first being 0, while -W and -H let you specify the resolution. To see the webcam image with tracking points overlaid on your face, you can add the arguments -v 3 -P 1 somewhere.
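For reference, the flags mentioned above can be summarized with a small argument parser that mirrors them. The defaults, as well as the --ip and --port options, are assumptions for illustration and not the script's authoritative interface:

```python
import argparse

def parse_tracker_args(argv):
    """Parse tracker options resembling those discussed in the text.

    -c camera index, -W/-H capture resolution, -v visualization level
    (3 shows the webcam image), -P 1 overlays tracking points.
    The defaults here are assumed, not confirmed values.
    """
    p = argparse.ArgumentParser(prog="facetracker")
    p.add_argument("-c", type=int, default=0, help="camera index, first camera is 0")
    p.add_argument("-W", type=int, default=640, help="capture width")
    p.add_argument("-H", type=int, default=360, help="capture height")
    p.add_argument("-v", type=int, default=0, help="visualization level")
    p.add_argument("-P", type=int, default=0, help="1 to show tracking points")
    p.add_argument("--ip", default="127.0.0.1", help="address to send tracking data to")
    p.add_argument("--port", type=int, default=11573, help="UDP port to send to (assumed default)")
    return p.parse_args(argv)

args = parse_tracker_args(["-c", "1", "-v", "3", "-P", "1"])
print(args.c, args.v, args.P)
```

Writing the options down this way makes it easier to see at a glance which flags affect tracking and which only affect the preview window.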
Notes on running wine: First make sure you have the Arial font installed; you can put an Arial.ttf file into your wine prefix. Secondly, make sure you have the 64 bit version of wine installed; it often comes in a package called wine64. Also make sure that you are using a 64 bit wine prefix.
To disable wine mode and make things work like on Windows, --disable-wine-mode can be used.

If an error appears after pressing the Start button, please confirm that the VSeeFace folder is correctly unpacked. If no window with a graphical user interface appears, please confirm that you have downloaded VSeeFace and not OpenSeeFace, which is just a backend library.
If you get an error message that the tracker process has disappeared, first try to follow the suggestions given in the error. If none of them help, press the Open logs button. If the log shows an error about missing multimedia libraries, note that Windows N editions, mostly distributed in Europe, are missing some necessary multimedia libraries. Follow these steps to install them.

Before running the camera test, make sure that no other program, including VSeeFace, is using the camera. After starting it, you will first see a list of cameras, each with a number in front of it. Enter the number of the camera you would like to check and press enter.
Next, it will ask you to select your camera settings as well as a frame rate. You can enter -1 to use the camera defaults and 24 as the frame rate. Press enter after entering each value.
After this, a second window should open, showing the image captured by your camera. If your face is visible on the image, you should see red and yellow tracking dots marked on your face. You can use this to make sure your camera is working as expected, your room has enough light, there is no strong light from the background messing up the image and so on.
If the tracking points accurately track your face, the tracking should work in VSeeFace as well. If you would like to see the camera image while your avatar is being animated, you can start VSeeFace with the camera set to [OpenSeeFace tracking] while run.bat is running; it should receive the tracking data from the active run.bat process. To figure out a good combination, you can try adding your webcam as a video source in OBS and play with the resolution and frame rate parameters to find something that works.