Post Migrated Here from security.stackexchange.com
makerofthings7

Is there a field guide to ECC for the IT Security layman?

I'm trying to understand ECC from an IT layman's perspective: to separate the theory from the standards, and to understand why certain features are or aren't implemented in the common ECC stacks.

Question

Namely, I'd like to know when a non-standard ECC library (or related hardware) should be used versus when a standard ECC library should be used.

Second, I'd also like to know what trade-offs were considered in the common implementations, such as (EC)DHE in Microsoft's or Java's crypto stacks, versus what's implemented in other software.


What I think I understand so far:

  1. There are several mathematical properties that can be used in encryption.

  2. Many protocols have been created on those properties to facilitate key exchange. The common standards are broken into the following categories:

  • NIST-only
  • The NIST / SECG overlap
  • SECG-only
  • ECC Brainpool
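To make the category overlap concrete, here is a small sketch (my own summary, not taken from any one standards document) of how the NIST prime curves and the SECG curves are literally the same objects under different names, while secp256k1 and the Brainpool curves sit outside the NIST list:

```python
# The NIST / SECG overlap: NIST's prime-field curves (FIPS 186) are the
# same curves as certain SECG curves (SEC 2), just under different names.
NIST_TO_SECG = {
    "P-192": "secp192r1",
    "P-224": "secp224r1",
    "P-256": "secp256r1",
    "P-384": "secp384r1",
    "P-521": "secp521r1",
}

# SECG-only example: secp256k1 (the Bitcoin curve) has no NIST name.
# Brainpool curves (RFC 5639, e.g. brainpoolP256r1) belong to neither list.
SECG_ONLY = {"secp256k1"}

print(NIST_TO_SECG["P-256"])  # the curve most TLS stacks pick by default
```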
  3. Real-world implementations of the above properties and standards are dictated by:
  • Patent concerns on the math and the key exchange (Certicom)
  • Security through obscurity (some haven't been released, and some "compatible" derivatives have been created)
  • Government approval (why did the government approve it? Backdoors, patents, the country that 'invented' it?)
  4. What I haven't been able to figure out is:
  • Are some implementations faster, more secure, or more suited for (or against) hardware optimizations?

  • Are some standards preferred simply because a vendor (or government) paid for their patent fees and will not be legislated?

  5. For a given implementation above, is there any risk (or benefit, depending on whose side you're on) of using patented techniques like:
  • implementation of curves over binary fields using normal bases;
  • point compression;
  • acceleration of Koblitz curves using the Frobenius endomorphism;
  • various optimization tricks on dedicated hardware architectures (FPGA, ASIC).
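Of the techniques listed, point compression is the easiest to illustrate. Below is a minimal pure-Python sketch (for illustration only, not a production implementation) using the public secp256k1 parameters: a point (x, y) is stored as a 0x02/0x03 parity prefix plus x alone, and y is recovered by solving the curve equation:

```python
# Point compression on secp256k1: y^2 = x^3 + 7 over GF(p).
p  = 2**256 - 2**32 - 977          # field prime; note p % 4 == 3
B  = 7                              # curve constant (a = 0, b = 7)
# Standard base point G of secp256k1:
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8

def compress(x, y):
    """33-byte SEC1 encoding: one parity byte (0x02 even / 0x03 odd) + x."""
    return bytes([2 + (y & 1)]) + x.to_bytes(32, "big")

def decompress(data):
    """Recover (x, y): solve y^2 = x^3 + 7, pick the root matching the prefix."""
    x = int.from_bytes(data[1:], "big")
    y = pow(x**3 + B, (p + 1) // 4, p)   # modular sqrt, valid since p % 4 == 3
    if (y & 1) != (data[0] & 1):
        y = p - y                        # the other square root has the right parity
    return x, y

c = compress(Gx, Gy)
assert decompress(c) == (Gx, Gy)        # round-trips; 33 bytes instead of 65
```

The trade-off is exactly the one the patent question is about: compression halves the wire size of a public key at the cost of one modular square root on decode, and for years many libraries shipped only uncompressed points to stay clear of the Certicom patent cloud.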