
Thread: Programmers wanted: Picking apart the sound metafiles

  1. #1
    Member
    Registered: Dec 2000
    Location: Finland

    Programmers wanted: Picking apart the sound metafiles

    I have been making custom conversations, and I recently rediscovered that we need .lad files to create them properly. Those files hold the lipsync data, and apparently they also hold at least links to all the other animation data used when the AI speaks. I did some research earlier and discovered that Ion Storm used a program called AniMeter to produce the .lad files:

    Edit: Apparently, and correct me if I'm wrong, Ion Storm used LipSinc's AniMeter to produce the lipsync animations, according to this article. LipSinc unfortunately ceased to exist in 2002. I assume LipSinc's Impersonator is the latest version of the tool (I can't seem to find AniMeter anywhere), but it doesn't seem to be able to export .lad files.
    Well, I bought UT2004 and installed the Impersonator. No luck. It seems we need someone to create a new tool that can produce these .lad files, but for a start I think it would be sufficient to have a tool that could extract the existing .lad files from the sound metafiles. Sadly, I'm not a programmer; I would try doing it myself if my programming skills weren't at the "Hello world" level.

    Would someone be willing to take on this task: to create a tool that could produce, or at least extract, the existing .lad files from the sound metafiles?

  2. #2
    Administrator
    Registered: Sep 2001
    Location: above the clouds
    Sounds like a very non-trivial task. Reminds me of the GOLEM thing Legend patched into Unreal 2. Without the original tool things get very difficult, but someone may have some ideas.

  3. #3
    Member
    Registered: Dec 2000
    Location: Finland
    Since I don't know anything about programming, I can't truly judge how hard it would be to do, but I imagine it would be extremely difficult.

    I'm not asking this for myself. The T3Ed community is eventually going to need it if we want to survive and create our FMs for T3. I consider conversations a very important part of the Thief games, and currently it completely ruins the mood when you see two guys just standing there "talking" to each other (and you can't even tell which one is talking).

    If such simple things aren't solved eventually, we should probably just give up and move to creating FMs for The Dark Mod. I hope it doesn't come to that (no offence to the DM team).

  4. #4
    Member
    Registered: Jan 2001
    Location: Exiled in sassenachland
    You can glean a little info from the Wayback Machine; OC3 Entertainment seem to hold the technology, but you probably knew that.

  5. #5
    Member
    Registered: Mar 2005
    Quote Originally Posted by scumble
    Sounds like a very non-trivial task.
    Actually it's easy. I wrote a script for New Horizon's project that extracted the audio data (although I don't think it was used in the end), and I was able to identify the LAD data although I never extracted it.

    Whether I can be bothered to complete the script to extract the LAD data is another question. Alternatively, if anybody knows (or is willing to learn) Python, I can send them the script to hack/improve/finish.
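
    For anyone who wants to experiment in the meantime, here is a rough sketch of the general carving idea in Python (not the script above, just an illustration). It does not know the real metafile layout: it only looks for embedded RIFF/WAVE audio, which carries its own size field, and reports the offsets of any "LAD" byte sequences so you can hex-dump around them. Treat the markers and offsets as guesses, not documentation of the format.

    # carve_meta.py -- exploratory sketch, not a finished extractor.
    # Assumes (unverified) that audio is stored as embedded RIFF/WAVE data
    # and that LAD records can be located by a plain "LAD" byte marker.
    import struct
    import sys

    def carve(path):
        data = open(path, "rb").read()

        # Carve embedded WAV files: "RIFF" + little-endian size + "WAVE".
        pos, count = 0, 0
        while True:
            pos = data.find(b"RIFF", pos)
            if pos == -1 or pos + 12 > len(data):
                break
            size = struct.unpack_from("<I", data, pos + 4)[0]
            if data[pos + 8:pos + 12] == b"WAVE":
                out = f"{path}.{count}.wav"
                with open(out, "wb") as f:
                    f.write(data[pos:pos + 8 + size])
                print(f"wrote {out} ({size + 8} bytes)")
                count += 1
            pos += 4

        # Only report possible LAD markers; the record layout itself still
        # needs to be reverse-engineered by hand.
        off = data.find(b"LAD")
        while off != -1:
            print(f"possible LAD marker at offset 0x{off:x}")
            off = data.find(b"LAD", off + 1)

    if __name__ == "__main__":
        for p in sys.argv[1:]:
            carve(p)

    Running it over one of the metafiles (python carve_meta.py somefile) should at least show whether the audio is stored as plain WAV and give you offsets to start picking the LAD structure apart.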

  6. #6
    Member
    Registered: Dec 2000
    Location: Finland
    Yeah, I knew about that site. The problem is that I can't find the lipsync controller files (.lbp files); the controller files that came with Thief3Ed are binary .lbd files. Actually, it might be sufficient to have someone decompile the .lbd files back to .lbp (assuming the .lbd files are just compiled .lbp files); that might be enough to let anyone create .lad files with UT2004 (a quick first look at the .lbd files is sketched below).

    I also noticed that I was wrong about the animation part: the AIs seem to play random animations in conversations. Still, I would really like to see their lips move as well.
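
    Before anyone attempts a real decompiler, a quick first look at the .lbd files can show whether they even resemble compiled .lbp text. This is only a generic inspection sketch (header dump plus a strings-style scan); the .lbd/.lbp relationship itself is still just an assumption.

    # peek_lbd.py -- first-look inspection, not a decompiler.
    import sys

    PRINTABLE = set(range(0x20, 0x7f))  # printable ASCII, including space

    def inspect(path, min_len=4):
        data = open(path, "rb").read()
        print(f"{path}: {len(data)} bytes, first 16 bytes: {data[:16].hex()}")

        # Classic `strings` pass: print runs of printable ASCII so any
        # embedded phoneme names, bone names or .lbp keywords show up.
        run = bytearray()
        for b in data:
            if b in PRINTABLE:
                run.append(b)
            else:
                if len(run) >= min_len:
                    print(run.decode("ascii"))
                run.clear()
        if len(run) >= min_len:
            print(run.decode("ascii"))

    if __name__ == "__main__":
        for p in sys.argv[1:]:
            inspect(p)

    If the strings output contains recognisable phoneme or keyword text, the .lbd files probably are just serialised .lbp data, and writing a converter becomes a much more realistic project.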

  7. #7
    Member
    Registered: Jan 2001
    Location: Exiled in sassenachland
    The hacky way of doing it is to just perform motions a la T2, but then you also have the problem of AIs playing their idle motions and sounds. I've tried using Bark Points, but they seem to have a probability built in, so the AI may or may not play the sound. I suppose using custom conversations (with or without lipsync) is the way to go.
